ryelgY5eg | [{"section_index": "0", "section_name": "OPTIMAL BINARY AUTOENCODING WITH PAIRWISE CORRELATIONS", "section_text": "Akshay Balsubramani\nabalsubr@stanford.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Consider a general autoencoding scenario, in which an algorithm learns a compression scheme fo independently, identically distributed (i.i.d.) V-dimensional bit vector data {(1), ..., x(n) }. Fo into an H-dimensional representation e(i), with H < V. It then decodes each e(i) back into a reconstructed example x(i) using some small amount of additional memory, and is evaluated on the quality of the reconstruction by the cross-entropy loss commonly used to compare bit vectors. A good autoencoder learns to compress the data into H bits so as to reconstruct it with low loss.\nWhen the loss is squared reconstruction error and the goal is to compress data in RV to RH, this is. often accomplished with principal component analysis (PCA), which projects the input data on the. top H eigenvectors of their covariance matrix (Bourlard & Kamp(1988);Baldi & Hornik(1989)) These eigenvectors in R constitute V H real values of additional memory needed to decode the. compressed data in RH back to the reconstructions in RV, which are linear combinations of the. eigenvectors. Crucially, this total additional memory does not depend on the amount of data n making it applicable when data are abundant..\nThis paper considers a similar problem, except using bit-vector data and the cross-entropy recon struction loss. Since we are compressing samples of i.i.d. V-bit data into H-bit encodings, a natural approach is to remember the pairwise statistics: the V H average correlations between pairs of bits in the encoding and decoding, constituting as much additional memory as the eigenvectors used in PCA The decoder uses these along with the H-bit encoded data, to produce V-bit reconstructions.\nWe show how to efficiently learn the autoencoder with the worst-case optimal loss in this scenario without any further assumptions, parametric or otherwise. It has some striking properties..\nThe decoding function is identical in form to the one used in a standard binary autoencoder with one hidden layer (Bengio et al.(2013a)) and cross-entropy reconstruction loss. Specifically, each bit of the decoding is the output of a logistic sigmoid artificial neuron of the encoded bits, with some learned weights w, E RH. This form emerges as the uniquely optimal decoding function, and is not assumed as part of any explicit model.\nWe show that the worst-case optimal reconstruction loss suffered by the autoencoder is convex in these decoding weights W = {wv}ve[v], and in the encoded representations E. Though it is not.\nMost of the work was done as a PhD student at UC San Diego"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We formulate learning of a binary autoencoder as a biconvex optimization problem vhich learns from the pairwise correlations between encoded and decoded bits Among all possible algorithms that use this information, ours finds the autoencoder hat reconstructs its inputs with worst-case optimal loss. The optimal decoder s a single layer of artificial neurons, emerging entirely from the minimax loss ninimization, and with weights learned by convex optimization. 
All this is reflected n competitive experimental results, demonstrating that binary autoencoding can oe done efficiently by conveying information in pairwise correlations in an optimal fashion."}, {"section_index": "3", "section_name": "1.1 NOTATION", "section_text": "The observed data and encodings can be written in matrix form, representing bits as 1:\nV 1+xv ) := 2 2 v=1"}, {"section_index": "4", "section_name": "1.2 PROBLEM SETUP", "section_text": "We find the autoencoding algorithm's best strategy in two parts. First, we find the optimal decoding. function of any encodings E given B, in Section|2 Then, we use the resulting optimal reconstruction. function to outline the best encoding procedure, i.e. one that finds the E, B that lead to the best. reconstruction, in Section|3.1] Combining these ideas yields an autoencoding algorithm in Section\njointly convex in both, the situation still admits a natural and efficient optimization algorithm in which the loss is alternately minimized in E and W while the other is held fixed. The algorithm is practical and performs well empirically, learning incrementally from minibatches of data in a stochastic optimization setting\n1 1V xn E Hxn X . (1) (1) n n e H H\nWith these definitions, the autoencoding problem we address can be precisely stated as two tasks. encoding and decoding. These share only the side information B. Our goal is to perform these steps so as to achieve the best possible guarantee on reconstruction loss, with no further assumptions. This. can be written as a zero-sum game of an autoencoding algorithm seeking to minimize loss against an. adversary, by playing encodings and reconstructions:.\nUsing X, algorithm plays (randomized) encodings E, resulting in pairwise correlations B : Using E and B, algorithm plays reconstructions X = (x(1),; ...; x(n)) E [-1, 1]Vn. . Given X,E,B, adversary plays X E [-1,1]Vxn to maximize reconstruction loss 1 n1 l(x(i),x\nTo incur low loss, the algorithm must use an E and B such that no adversary playing X can inflict. higher loss. The algorithm never sees X, which represents the worst the data could be given the algorithm's incomplete memory of it (E, B) and reconstructions (X)..\n3.2|(Algorithm[1), where its implementation and interpretation are specified. Further discussion and related work in Section4|are followed by more extensions of the framework in Section|5] Experiments in Section|6|show extremely competitive results with equivalent fully-connected autoencoders trained with backpropagation\nTo address the game of Section[1.2] we first assume E and B are fixed, and derive the optimal decoding rule given this information. We show in this section that the form of this optimal decoder is precisely the same as in a classical autoencoder: having learned a weight vector w, E RH for each. v E [V], the vth bit of each reconstruction x' is expressed as a logistic function of a w,-weighted combination of the H encoded bits e' - a logistic artificial neuron with weights w. The weight. vectors are learned by convex optimization. despite the nonconvexity of the transfer functions.\nTo develop this, we minimize the worst-case reconstruction error, where X is constrained by our prior knowledge that B = XET, i.e. Exy = b, Vv E [V]. This can be written as a function of E:\nWe solve this minimax problem for the optimal reconstructions played by the minimizing player ir (3, written as x(1)*. x(n)*\nThis tells us that the optimization problem of finding the minimax optimal reconstructions x(i) is. 
extremely convenient in several respects. The learning problem decomposes over the V bits in the. decoding, reducing to solving for a weight vector w* E RH for each bit v, by optimizing each bitwise. slack function. Given the weights, the optimal reconstruction of any example i can be specified by a layer of logistic sigmoid artificial neurons of its encoded bits, with w*T e(i) as the bitwise logits..\nV n (E) W*.E 2n i=1 v=1\nn L*(E) := min max x(1),..,x(n)E[-1,1]V x(1),...,x(n)E[-1,1]V, n i=1 VvE[V]: 1Ex,=by\nHaving computed the optimal decoding function in the previous section given any E and B, we now switch perspectives to the encoder, which seeks to compress the input data X into encoded representations E (from which B is easily calculated to pass to the decoder). We seek to find (E, B to ensure the lowest worst-case reconstruction loss after decoding; recall that this is L*(E) from (3)\nObserve that XET = B by definition, and that the encoder is given X. Therefore, by using Thm. and substituting b, = Ex, Vv E [V],\nSo it is convenient to define the feature distortion[[for any v E [V] with respect to W, between any example x and its encoding e:\nL(W,E) min min EE[-1,1]HXn 2n e(i)E[-1,1]H i=1 v=1\nwhich immediately yields the following result\nV Enc(x(i); w) := e(i)*(W) := argmin eE[-1,1]H v=1"}, {"section_index": "5", "section_name": "3.2 AN AUTOENCODER LEARNING ALGORITHM", "section_text": "Our ultimate goal is to minimize the worst-case reconstruction loss. As we have seen in (3) and (6). it is convex in the encoding E and in the decoding parameters W, each of which can be fixed while. minimizing with respect to the other. This suggests a learning algorithm that alternately performs twc steps: finding encodings E that minimize L(W, E) as in (6) with a fixed W, and finding decoding. parameters W*(E, B), as given in Algorithm1\nAlgorithm 1 Pairwise Correlation Autoencoder (PC-AE)\n'Noting that (wJ e) ~ |wJ e|, we see that W (e, x) ~ wJ e (sgn(wJ e) x). So the optimizer tends. to change e so that w,, e matches signs with x, motivating the name.\n3W(e,x) :=-xyw, e+(w,e\nFrom the above discussion, the best E given any decoding W, written as E*(w), solves the minimization\nObserve that the encoding function Enc((); W) can be efficiently computed to any desired pre- cision since the feature distortion W (e, x(i)) of each bit v is convex and Lipschitz in e; an L1 error of e can be reached in O(e-2) linear-time first-order optimization iterations. Note that the encodings need not be bits, and can be e.g. unconstrained E RH instead; the proof of Thm.1|assumes no structure on them, and the optimization will proceed as above but without projecting into the hypercube.\n1 Vi E [n] : [e(i)]t = ENC(x(i);Wt-1] Bt = XE n\nUpdate weight vectors w[t for each v E Vto minimize slack function, using encodings Et\nn 1 Vv E [V] : [w]t = arg min Jw w + n wERH i=1\nOur derivation of the encoding and decoding functions involves no model assumptions at all, only using the minimax structure and pairwise statistics that the algorithm is allowed to remember Nevertheless, the (en/de)coders can be learned and implemented efficiently..\nDecoding is a convex optimization in H dimensions, which can be done in parallel for each b v E [V]. This is relatively easy to solve in the parameter regime of primary interest when data ar abundant, in which H < V < n. Similarly, encoding is also a convex optimization problem i1 only H dimensions. 
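To make these two alternating convex steps concrete, below is a minimal NumPy sketch of one PC-AE epoch, written for the cross-entropy loss in the ±1 bit convention (where the optimal sigmoid decoder can be expressed as tanh(·/2)). This is a sketch under stated assumptions, not the released TensorFlow implementation: plain projected gradient descent with a fixed step size stands in for Adagrad, and the helper names (`pc_ae_epoch`, `encode`, `decode`) and the step-size/iteration constants are illustrative.

```python
import numpy as np

def decode(W, E):
    # Optimal decoder: each reconstructed bit is a logistic sigmoid neuron of
    # the encoded bits; in the +/-1 convention this reads tanh(w_v . e / 2).
    return np.tanh(W @ E / 2.0)

def encode(W, X, n_steps=200, lr=0.1):
    # ENC: minimize the total feature distortion for all examples at once by
    # projected gradient descent onto the hypercube [-1, 1]^H (cf. App. A.2.1).
    E = np.zeros((W.shape[1], X.shape[1]))
    for _ in range(n_steps):
        R = X - decode(W, E)            # residuals X - Xhat
        E += lr * (W.T @ R)             # descend the convex distortion
        np.clip(E, -1.0, 1.0, out=E)    # project back into the hypercube
    return E

def pc_ae_epoch(W, X, n_steps=200, lr=0.1):
    # One epoch: encode with the current weights, record the pairwise
    # correlations B, then re-fit W by minimizing the slack function.
    n = X.shape[1]
    E = encode(W, X, n_steps, lr)
    B = X @ E.T / n                     # pairwise correlations B = (1/n) X E^T
    for _ in range(n_steps):
        B_hat = decode(W, E) @ E.T / n  # "hallucinated" correlations
        W = W - lr * (B_hat - B)        # slack gradient is B_hat - B (cf. eq. 9)
    return W, E, B
```

A full run would initialize W (say, at zero) and call `pc_ae_epoch` once per epoch or minibatch. Note that the decoding step touches the data only through B, and its gradient vanishes exactly when the hallucinated correlations match B, as described in Appendix A.2.2.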
If the data examples are instead sampled in minibatches of size n, they can be encoded in parallel, with a new minibatch being sampled to start each epoch t. The number of examples n (per batch) is essentially only limited by nH, the number of compressed representations that fit in memory.

So far in this paper, we have stated our results in the transductive setting, in which all data are given together a priori, with no assumptions whatsoever made about the interdependences between the V features. However, PC-AE operates much more efficiently than this might suggest. Crucially, the encoding and decoding tasks both depend on n only to average a function of $\mathbf{x}^{(i)}$ or $\mathbf{e}^{(i)}$ over $i \in [n]$, so they can both be solved by stochastic optimization methods that use first-order gradient information, like variants of stochastic gradient descent (SGD). We find it remarkable that the minimax optimal encoding and decoding can be efficiently learned by such methods, which do not scale computationally in n. Note that the result of each of these steps involves $\Omega(n)$ outputs ($\mathbf{E}$ and $\hat{\mathbf{X}}$), which are all coupled together in complex ways.

As we noted previously, the objective function of the optimization is biconvex. This means that the alternating minimization algorithm we specify is an instance of alternating convex search, shown in that literature to converge under broad conditions (Gorski et al. (2007)). It is not guaranteed to converge to the global optimum, but each iteration will monotonically decrease the objective function. In light of our introductory discussion, the properties and rate of such convergence would be interesting to compare to stochastic optimization algorithms for PCA, which converge efficiently under broad conditions (Balsubramani et al. (2013); Shamir (2016)).

The basic game used so far has assumed perfect knowledge of the pairwise correlations, leading to equality constraints $\forall v \in [V]: \frac{1}{n}\mathbf{E}\tilde{\mathbf{x}}_v = \mathbf{b}_v$. This makes sense in PC-AE, where the encoding phase of each epoch gives the exact $\mathbf{B}^t$ for the decoding phase. However, in other stochastic settings, as for denoising autoencoders (see Sec. 5.2), it may be necessary to relax this constraint. A relaxed constraint $\left\|\frac{1}{n}\mathbf{E}\tilde{\mathbf{x}}_v - \mathbf{b}_v\right\|_\infty \le \epsilon_v$ exactly corresponds to an extra additive regularization term of $\epsilon_v \|\mathbf{w}_v\|_1$ on the corresponding weights in the convex optimization used to find W (Appendix D.1). Such regularization leads to provably better generalization (Bartlett (1998)) and is often practical to use, e.g. to encourage sparsity. But we do not use it for our PC-AE experiments in this paper.

Our approach PC-AE is quite different from existing autoencoding work in several ways.

First and foremost, we posit no explicit decision rule, and avoid optimizing the highly non-convex decision surface traversed by traditional autoencoding algorithms that learn with backpropagation (Rumelhart et al. (1986)). The decoding function, given the encodings, is a single layer of artificial neurons only because of the minimax structure of the problem when minimizing worst-case loss. This differs from reasoning typically used in neural networks (see Jordan (1995)), in which the loss is the negative log-likelihood (NLL) of the joint probability, which is assumed to follow a form specified by logistic artificial neurons and their weights. We instead interpret the loss in the usual direct way as the NLL of the predicted probability of the data given the visible bits, and avoid any assumptions on the decision rule (e.g.
not monotonicity in the score w, e(i), or even dependence on such a score)..\nFurthermore, efficient first-order convex optimization methods for both encoding and decoding steps. manipulate more intermediate gradient-related quantities, with facile interpretations. For details, see Appendix|A.2\nCrucially, we make no assumptions whatsoever on the form of the encoding or decoding, excep on the memory used by the decoding. Some such \"regularizing\" restriction is necessary to rule ou the autoencoder just memorizing the data, and is typically expressed by assuming a model class o compositions of artificial neuron layers. We instead impose it axiomiatically by limiting the amoun of information transmitted through B, which does not scale in n; but we do not restrict how thi information is used. This confers a clear theoretical advantage, allowing us to attain the stronges robust loss guarantee among all possible autoencoders that use the correlations B\nMore importantly in practice, avoiding an explicit model class means that we do not have to optimize. the typically non-convex model, which has long been a central issue for backpropagation-based. learning methods (e.g.Dauphin et al. (2014)). Prior work related in spirit has attempted to avoid. this through convex relaxations, including for multi-layer optimization under various structural assumptions (Aslan et al.(2014);Zhang et al.(2016), and when the number of hidden units is varied by the algorithm (Bengio et al.(2005); Bach[(2014)).\nOur approach also isolates the benefit of higher n in dealing with overfitting, as the pairwise. correlations B can be measured progressively more accurately as n increases. In this respect, we. follow a line of research using such pairwise correlations to model arbitary higher-order structure. among visible units, rooted in early work on (restricted) Boltzmann Machines (Ackley et al.(1985):. Smolensky[(1986); Rumelhart & McClelland(1987);Freund & Haussler(1992)). More recently theoretical algorithms have been developed with the perspective of learning from the correlations. between units in a network, under various assumptions on the activation function, architecture, and weights, for both deep (Arora et al.[(2014)) and shallow networks (using tensor decompositions. e.g.Livni et al.(2014); Janzamin et al.(2015)). Our use of ensemble aggregation techniques (from Balsubramani & Freund (2015af 2016)) to study these problems is anticipated in spirit by prior work. as well, as discussed at length by Bengio(2009) in the context of distributed representations..\nWe have established that a single layer of logistic artificial neurons is an optimal decoder, given. only indirect information about the data through pairwise correlations. This is not a claim that. autoencoders need only a single-layer architecture in the worst case. Sec.3.1establishes that the best. representations E are the solution to a convex optimization, with no artificial neurons involved in. computing them from the data. Unlike the decoding function, the optimal encoding function ENC. cannot be written explicitly in terms of artificial neurons, and is incomparable to existing architectures. (though it is analogous to PCA in prescribing an efficient operation that yields the encodings from. unlabeled data). Also, the encodings are only optimal given the pairwise correlations; training algorithms like backpropagation, which communicate other knowledge of the data through derivative. 
composition, can learn final decoding layers that outperform ours, as we see in experiments..\nIn our framework so far, we explore using all the pairwise correlations between hidden and visible bits to inform learning by constraining the adversary, resulting in a Lagrange parameter - a weight for each constraint. These V H weights W constitute the parameters of the optimal decoding layer describing a fully connected architecture. If just a select few of these correlations were used, only they would constrain the adversary in the minimax problem of Sec.2l so weights would only be introduced for them, giving rise to sparser architectures.\nOur central choices - to store only pairwise correlations and minimize worst-case reconstruction loss - play a similar regularizing role to explicit model assumptions, and other autoencoding methods. may achieve better performance on data for which these choices are too conservative, by e.g. making. distributional assumptions on the data. From our perspective, other architectures with more layers - particularly highly successful ones like convolutional, recurrent, residual, and ladder networks. (LeCun et al.(2015); He et al.(2015); Rasmus et al.(2015)) lend the autoencoding algorithm more power by allowing it to measure more nuanced correlations using more parameters, which decreases. the worst-case loss. Applying our approach with these would be interesting future work..\nExtending this paper's convenient minimax characterization to deep representations with empirical. success is a very interesting open problem. Prior work on stacking autoencoders/RBMs (Vincent et al.\n(2010)) and our learning algorithm PC-AE suggest that we could train a deep network in alternating forward and backward passes. Using this paper's ideas, the forward pass would learn the weights tc each layer given the previous layer's activations (and inter-layer pairwise correlations) by minimizing the slack function, with the backward pass learning the activations for each layer given the weights to / activations of the next layer by convex optimization (as we learn E). Both passes would consist of successive convex optimizations dictated by our approach, quite distinct from backpropagation though loosely resembling the wake-sleep algorithm (Hinton et al.(1995)).\nParticularly recently, autoencoders have been of interest largely for their many applications beyonc. compression, especially for their generative uses. The most directly relevant to us involve repurposing denoising autoencoders (Bengio et al.(2013b); see Sec.5.2); moment matching among hidden anc. visible units (Li et al.(2015); and generative adversarial network ideas (Goodfellow et al.(2014 Makhzani et al.(2015)), the latter particularly since the techniques of this paper have been applied tc binary classification (Balsubramani & Freund(2015a b)). These are outside this paper's scope, but. suggest themselves as future extensions of our approach.."}, {"section_index": "6", "section_name": "5.1 OTHER RECONSTRUCTION LOSSES", "section_text": "It may make sense to use another reconstruction loss other than cross-entropy, for instance the. expected Hamming distance between x(i) and x(i). It turns out that the minimax manipulations we use work under very broad conditions, for nearly any loss that additively decomposes over the V bit ar monotonically decreasing and increasing respectively (recall that for cross-entropy loss, this is true a. l+ (x$)) = ln J); they need not even be convex. This monotonicity is a natural condition. 
1x(i) because the loss measures the discrepancy to the true label, and holds for all losses in common use\nOur framework can be easily applied to learn a denoising autoencoder (DAE;Vincent et al.(2008 2010), which uses noise-corrupted data (call it X) for training, and uncorrupted data for evaluation From our perspective, this corresponds to leaving the learning of W unchanged, but using corrupted data when learning E. Consequently, the minimization problem over encodings must be changed to account for the bias on B introduced by the noise; so the algorithm plays given the noisy data, but to minimize loss against X. This is easiest to see for zero-mean noise, for which our algorithms are completely unchanged because B does not change (in expectation) after the noise is added.\nChanging the partial losses only changes the structure of the minimax solution in two respects: by. altering the form of the transfer function on the decoding neurons, and the univariate potential well . optimized to learn the decoding weights. Otherwise, the problem remains convex and the algorithm is identical. Formal statements of these general results are in Appendix[E.\nAnother common scenario illustrating this technique is to mask a p fraction of the input bits uniformly at random (in our notation, changing 1s to -1s). This masking noise changes each pairwise correlation XU)eh by subtracting this factor du,h. This Ou,h can be estimated (w.h.p.) given xu, eh, P, xy. But even with just the noisy data and not x,, we can estimate du.h w.h.p. by extrapolating the correlation of the bits of x, that are left as +1 (a 1 p fraction) with the corresponding values in en (see AppendixC).\nTable 1: Cross-entropy reconstruction losses for PC-AE and a vanilla single-layer autoencoder, with. binary and unconstrained real-valued encodings, and significant results in bold. The PC-AE results are significantly better (see Appendix[A) than the AE results.\nThe datasets we use are first normalized to [0, 1], and then binarized by sampling each pixel stochasti cally in proportion to its intensity, following prior work (Salakhutdinov & Murray(2008)). Changing between binary and real-valued encodings in PC-AE requires just a line of code, to project the en- codings into [1, 1]H after convex optimization updates to compute ENc(). We use Adagrad (Duchi et al. (2011)) for the convex minimizations of our algorithms; we observed that their performance is not very sensitive to the choice of optimization method, explained by our approach's convexity.\nWe compare to a basic AE with a single hidden layer, trained using the Adam method with default parameters (Kingma & Ba (2014)). Other models like variational autoencoders (Kingma & Welling (2013)) are not shown here because they do not aim to optimize reconstruction loss or are not comparably general autoencoding architectures. We also use a sign-thresholded PCA baseline (essentially a completely linear autoencoder, but with the output layer thresholded to be in [-1, 1]) see Appendix|A|for more details. We vary the number of hidden units H for all algorithms, and try both binary and unconstrained real-valued encodings where appropriate; the respective AE uses logistic sigmoid and ReLU transfer functions for the encoding neurons. The results are in Table|1\nThe reconstruction performance of PC-AE indicates that it can encode information very well using. pairwise correlations, compared to the directly learned AE and PCA approaches. Loss can become. 
extremely low when H is raised, giving B the capacity to robustly encode almost all the information in the input bits X. The performance is roughly equal between binary hidden units and unconstrained. ones, which is expected by our derivations..\nWe also try learning just the decoding layer of Sec.2 on the encoded representation of the AE. This. is motivated by the fact that Sec. 2|establishes our decoding method to be worst-case optimal given. any E and B. We find the results to be significantly worse than the AE alone in all datasets used (e.g. reconstruction loss of ~ 171/133 on MNIST, and ~ 211/134 on Omniglot, with 32/100 hidden units respectively). This reflects the AE's training backpropagating information about the data beyond. oairwise correlations, through non-convex function compositions - however, this comes at the cos of being more difficult to optimize. The representations learned by the ENc function of PC-AE are. quite different and capture much more of the pairwise correlation information, which is used by the. decoding layer in a worst-case optimal fashion. We attempt to visually depict the differences between. the representations in Fig.3\nAs discussed in Sec. 4] we do not claim that this paper's method will always achieve the best empirica reconstruction loss, even among single-layer autoencoders. We would like to make the encoding\n2TensorFlow code available at https: / /github. com/aikanor/pc-autoencoder\nPC-AE (bin.) PC-AE (real) AE (bin.) AE (real) PCA MNIST. H = 32 51.9 53.8 65.2 64.3 86.6 9.2 9.9 26.8 25.0 52.7 MNIST. H = 100 Omniglot, H = 32 76.1 77.2 93.1 90.6 102.8 Omniglot, H = 100 12.1 13.2 46.6 45.4 63.6 Caltech-101, H = 32 54.5 54.9 97.5 87.6 118.7 Caltech-101, H = 100 7.1 7.1 64.3 45.4 75.2 notMNIST, H = 32 121.9 122.4 149.6 141.8 174.0 notMNIST, H = 100 62.2 63.0 99.6 92.1 115.5 Adult, H = 10 7.7 7.8 9.3 8.1 13.5 Adult, H = 20 0.65 0.64 2.5 1.5 7.9\nIn this section we compare our approachlempirically to a standard autoencoder with one hidden layer (termed AE here) trained with backpropagation, and a thresholded PCA baseline. Our goal is simply to verify that our approach, though very different, is competitive in reconstruction performance.\nFigure 1: Top row: randomly chosen test images from Caltech-101 silhouettes. Middle and bottor rows: corresponding reconstructions of PC-AE and AE with H = 32 binary hidden units..\nFigure 2: As Fig.2] with H = 100 on Omniglot. Difference in quality is particularly noticeable in the 1st, 5th, 8th, and 11th columns."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "I am grateful to Jack Berkowitz, Sanjoy Dasgupta, and Yoav Freund for helpful discussions; Daniel. Hsu and Akshay Krishnamurthy for instructive examples; and Gary Cottrell for an enjoyable chat. I acknowledge funding from the NIH (grant R01ES02500902)."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmanr machines. Cognitive science, 9(1):147-169, 1985\nFrancis Bach. Breaking the curse of dimensionality with convex neural networks. arXiv preprini arXiv:1412.8690, 2014.\nPierre Baldi and Kurt Hornik. Neural networks and principal component analysis: Learning from examples without local minima. Neural networks, 2(1):53-58, 1989\nAkshay Balsubramani and Yoav Freund. Scalable semi-supervised classifier aggregation. In Advances in Neural Information Processing Systems (NIPS), 2015b.\nfunction quicker to compute, as well. 
But we believe this paper's results, especially when H is high. illustrate the potential of using pairwise correlations for autoencoding as in our approach, learning to. encode with alternating convex minimization and extremely strong worst-case robustness guarantees\nFigure 3: Top three rows: the reconstructions of random test images from MNIST (H = 12), as in Fig.2l PC-AE achieves loss 105.1 here, and AE 111.2. Fourth and fifth rows: visualizations of all the hidden units of PC-AE and AE, respectively. It is not possible to visualize the PC-AE encoding units by the image that maximally activates them, as commonly done, because of the form of the Enc function which depends on W and lacks explicit encoding weights. So each hidden unit h is depicted by the visible decoding of the encoded representation which has bit h \"on\" and all other bits \"off.\" (If this were PCA with a linear decoding layer, this would simply represent hidden unit h by its corresponding principal component vector, the decoding of the hth canonical basis vector in RH\nPeter L Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory. 44(2):525-536, 1998.\nHerve Bourlard and Yves Kamp. Auto-association by multilayer perceptrons and singular value decomposition. Biological cybernetics, 59(4-5):291-294, 1988.\nNicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.\nYann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in neural information processing systems (NIPS). pp. 2933-2941. 2014\nAkshay Balsubramani, Sanjoy Dasgupta, and Yoav Freund. The fast convergence of incremental pca In Advances in Neural Information Processing Systems (NIPs). pp. 3174-3182. 2013\nYuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. Interna- tional Conference on Learning Representations (ICLR), 2016. arXiv preprint arXiv:1509.00519.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair. Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pp. 2672-2680, 2014.\nGeoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The\" wake-sleep\" algorithn for unsupervised neural networks. Science, 268(5214):1158-1161, 1995.\nMichael I Jordan. Why the logistic function? a tutorial discussion on probabilities and neural networks, 1995\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprini arXiv:1412.6980, 2014.\nRoi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural. networks. In Advances in Neural Information Processing Systems (NIPs). pp. 855-863. 2014\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015\nMajid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of non-convexity Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473.. 2015.\nAntti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 
3546- 3554, 2015.\nDavid E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, 1986\nPascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11:3371-3408, 2010."}, {"section_index": "9", "section_name": "EXPERIMENTAL DETAILS", "section_text": "In addition to MNIST, we use the preprocessed version of the Omniglot dataset found in|Burda et al. (2016), split 1 of the Caltech-101 Silhouettes dataset, the small notMNIST dataset, and the UCI Adult (a1a) dataset. The results reported are the mean of 10 Monte Carlo runs, and the PC-AE significance results use 95% Monte Carlo confidence intervals. Only notMNIST comes without a predefined split so the displayed results use 10-fold cross-validation. Non-binarized versions of all datasets (grayscale pixels) resulted in nearly identical PC-AE performance (not shown); this is as expected from its derivation using expected pairwise correlations, which with high probability are nearly invariant under binarization (by e.g. Hoeffding bounds).\nWe used minibatches of size 250. All standard autoencoders use the 'Xavier' initialization and trained for 500 epochs or using early stopping on the test set. The \"PCA\" baseline was run on exactly. the same input data as the others; it finds decodings by mean-centering this input, finding the top. H principal components with standard PCA, reconstructing the mean-centered input with these. components, adding back the means, and finally thresholding the result to [--1, 1]\nWe did not evaluate against other types of autoencoders which regularize (Kingma & Welling. [2013)) or are otherwise not trained for direct reconstruction loss minimization. Also, not shown is the performance of a standard convolutional autoencoder (32-bit representation, depth-3 64-64-32 (en/de)coder) which performs better than the standard autoencoder, but is still outperformed by PC-AE on our image-based datasets. A deeper architecture could quite possibly achieve superior. performance, but the greater number of channels through which information is propagated makes fair. comparison with our flat fully-connected approach difficult. We consider extension of our PC-AF approach to such architectures to be fascinating future work.."}, {"section_index": "10", "section_name": "A.1 FURTHER RESULTS", "section_text": "Our bound on worst-case loss is invariably quite tight, as shown in Fig. 4] Similar results are found on all datasets. This is consistent with our conclusions about the nature of the PC-AE representations - conveying almost exactly the information available in pairwise correlations.\n+++++++++++++++\nFigure 4: Actual reconstruction loss to real data (red) and slack function [objective function] value (dotted green), during Adagrad optimization to learn W using the optimal E, B. Monotonicity is expected since this is a convex optimization. The objective function value theoretically upper-bounds the actual loss, and practically tracks it nearly perfectly.\nA 2D visualization of MNIST is in Fig. 6] showing that even with just two hidden units there is enough information in pairwise correlations for PC-AE to learn a sensible embedding. 
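For reference, the stochastic binarization used in the experimental details above (intensities normalized to [0, 1] and sampled into ±1 bits, following Salakhutdinov & Murray (2008)) amounts to a one-line NumPy operation. This is a sketch for illustration; the helper name `binarize` and the fixed seed are not from the paper's code.

```python
import numpy as np

def binarize(X, seed=0):
    # X holds pixel intensities normalized to [0, 1].
    # Each pixel fires (+1) with probability equal to its intensity and is
    # -1 otherwise, matching the randomized +/-1 bit convention used here.
    rng = np.random.default_rng(seed)
    return np.where(rng.random(X.shape) < X, 1.0, -1.0)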
We also include more pictures of our autoencoders' reconstructions, and visualizations of the hidden units when H = 100 in Fig.5\nFigure 5: Visualizations of all the hidden units of PC-AE (left) and AE (right) from Omniglot for H = 100, as in Fig.3\n:::\nFigure 6: AE (left) and PC-AE (right) visualizations of a random subset of MNIST test data, witl H = 2 real-valued hidden units, and colors corresponding to class labels (legend at left). PC-AE's. loss is ~ 189 here, and that of AE is ~ 179.\nHere we give some details that are useful for interpretation and implementation of the proposed method.\nProposition|2|defines the encoding function for any data example x as the vector that minimizes the. total feature distortion, summed over the bits in the decoding, rewritten here for convenience:\nV Enc(x(); w) := argmin eE[-1,1]H v=1\nDoing this on multiple examples at once (in memory as a minibatch) can be much faster than on each example separately. We can now compute the gradient of the objective function w.r.t. each example i E [n], writing the gradient w.r.t. example i as column i of a matrix G E RH n. G can be calculated efficiently in a number of ways, for example as follows:.\nCompute matrix of hallucinated data X := '(WE) E R' Subtract X to compute residuals R := X X E RVn Compute G = 1wTR e RHxn\nOptimization then proceeds with gradient descent using G, with the step size found using line search. Note that since the objective function is convex, the optimum E* leads to optimal residuals R* E RVxn such that G = wTR* = 0Hxn, so each column of R* is in the null space of WT which maps the residual vectors to the encoded space. We conclude that although the compression is not perfect (so the optimal residuals R* oV n in general), each column of R* is orthogonal to. the decoding weights at an equilibrium towards which the convex minimization problem of (7) is. guaranteed to stably converge."}, {"section_index": "11", "section_name": "A.2.2 DECODING", "section_text": "The decoding step finds W to ensure accurate decoding of the given encodings E with correlations B, solving the convex minimization problem:\nThis can be minimized by first-order convex optimization. The gradient of d8) at W i.\nFigure 7: As Fig.2 with H = 100 on Caltech-101 silhouettes. 8 ddd 8 Figure 8: As Fig.2] with H = 100 on MNIST. A.2.1 ENCODING\nV 1 n W* = argmin (w,e(i) W n WERVxH v=1 i=1\n- B + [I'(WE)]E]\nThe second term can be understood as \"hallucinated\" pairwise correlations B, between bits of the encoded examples E and bits of their decodings under the current weights, X := '(WE). The hallucinated correlations can be written as B := XET. Therefore, (9) can be interpreted as the. residual correlations B - B. Since the slack function of (8) is convex, the optimum W* leads to. hallucinated correlations B* = B, which is the limit reached by the optimization algorithm after many iterations.\nIn this paper, we represent the bit-vector data in a randomized way in [-1, 1]V . Randomizing the data only relaxes the constraints on the adversary in the game we play; so at worst we are working with al upper bound on worst-case loss, instead of the exact minimax loss itself, erring on the conservative side. Here we briefly justify the bound as being essentially tight, which we also see empirically ir this paper's experiments.\nIn the formulation of Section2] the only information we have about the data is its pairwise correlations. with the encoding units. When the data are abundant (n large), then w.h.p. 
these correlations are close to their expected values over the data's internal randomization, so representing them as continuous. values w.h.p. results in the same B and therefore the same solutions for E, W. We are effectively. allowing the adversary to play each bit's conditional probability of firing, rather than the binary. realization of that probability.\nThis allows us to apply minimax theory and duality to considerably simplify the problem to a convex optimization, when it would otherwise be nonconvex, and computationally hard (Baldi (2012). Th fact that we are only using information about the data through its expected pairwise correlations witl the hidden units makes this possible.\nThe above also applies to the encodings and their internal randomization, allowing us to learn binary randomized encodings by projecting to the convex set [1, 1|H\nn 1 n i=1\nHere, we express this in terms of the known quantities xu, en, P, and not the unknown denoised data XU.\n2 i U U\nFigure 9: As Fig.. 2 with H = 32 on notMNIST\n2 x 11 A AY U h U U U h\nn n (i) =+1^x C =+1xi)=+\n=+1^x(i) 2 n n 1 Ax n 1 n 1\nwhere (a) uses the minimax theorem (Cesa-Bianchi & Lugosi((2006)), which can be applied as i linear programming, because the objective function is linear in x(i) and w,. Note that the weight are introduced merely as Lagrange parameters for the pairwise correlation constraints, not as mode assumptions.\n- 0. Therefore the first term above is zero LU nd the expression can be simplified: :(i) =+1^x(i) (10 X Now on any example i, independent of the value of e), a p fraction of the bits where x) .(i) +1 are lipped to get (). Therefore, n = +1 A =+1^x(i) utting it all together, +1^ ii 2 2 i L1 A x PROOFS 1+x(i) roof of Theorem1] Writing T(x)) := l_(x(i) for convenience, we x(i) an simplify L*, using the definition of the loss (2), and Lagrange duality for all V H constraint ivolving B. his leads to the following chain of equalities, where for brevity the constraint sets are sometime mitted when clear, and we write X as shorthand for the data x(1) , x(n) and X analogously fo ne reconstructions.\nThis leads to the following chain of equalities, where for brevity the constraint sets are sometimes omitted when clear, and we write X as shorthand for the data x(1), ..., x(n) and X analogously for the reconstructions.\nmin max x(1) ,x(n) (1) .,x(n)E[-1,1]V, 7 E[-1,1]V VvE[V]: 1Ex,=b, 1 min max min 2 x x WERVxH V (a) 1 1 l+x+lx))-x min Wu min max n x x V=1 V n V 1 1 min l+(x (i) W min max W 2 w1,...,wy n x x(i)E[-1,1]V U=] i=1 (11)\n1 + x min max (1) n 1 ,x(n)E[-1,1]V E[-1,1]V VvE[V]:Ex,=by n V 1 (e+(x&)+l(x8)-x@T(x) - min max min 2 x x WERVxH n i=1 v=1 V (a) 1 +x(i))+l_(x(i)) min 0 W - min max n x x v=1 V n V 1 1 l+(x($)+l_(x(i)+ min Wy min max 0 W. 2 w1,.,wv n x x(i)E[-1,1]V =1 v\n1 l+(x{$)+l_(x($)+|wT Te(i) _ min min 2 W1,...,Wy n x(i)E[-1,1]V (x() +wTe(i)_T( min W min Wy ERH n X\nThe absolute value breaks down into two cases, so the inner minimization's objective can be simplified\nfalls in the second case of (12), where w. e(i)\nD.1 Lo CORRELATION CONSTRAINTS AND L1 WEIGHT REGULARIZATION\nHere we formalize the discussion of Sec. 3.4|with the following result\nn min max x(1). ..,x(n)E[-1,1]V x(1) ,x(n)E[-1,1]V VvE[V]:Ex,-bu] LE (wTe(i)+ev|wu|| min W,ERH\nFor each v, i, the minimizing x%) is a logistic function of the encoding e(i) with weights equal to the minimizing w* above, exactly as in Theorem|1\nProof. 
The proof adapts the proof of Theorem 1] following the result on L1 regularization in Balsubramani & Freund[(2016) in a very straightforward way; we describe this here\nWe break each Loo constraint into two one-sided constraints for each v, i.e. Ex, - b, < ey1n and Exy - by -e,1n. These respectively give rise to two sets of Lagrange parameters Au, Sv 0H for each v, replacing the unconstrained Lagrange parameters wy E RH\nThe conditions for the minimax theorem apply here just as in the proof of Theorem[1] so that (11) i replaced by\nSuppose for some h E [H] that &v,h > 0 and Av,h > 0. Then subtracting min(v,h, Av,h) from both does not affect the value [$v - Ay]n, but always decreases [$v + Xy]n, and therefore always decreases the objective function. Therefore, we can w.l.o.g. assume that Vh E [H] : min(Su,h, Au,n) = 0 Defining w, = &u Ay (so that &u,h = [wu,h]+ and Au,h = [wu,h]_ for all h), we see that the term ey1T(Sv + Xy) in (13) can be replaced by ev ||wv||1\nProceeding as in the proof of Theorem[1[gives the result\ne if w,e(i)>T(x(i) W i if wJe(i)<T(x(i) W.\nPutting the cases together, we have shown the form of the summand I. We have also shown the completes the proof.\n1 min `(b,(Sv-Xy)-ey1'(Sv+X) (13 $1,...,Ev U=1 1 l+(x(j))+l_(x())+max x(i) (Su-)ei_T(x min n xi) =1 v\nSince the data X are still randomized binary, we first broaden the definition of (2), rewritten here\nV 1+xv i) := L 2 2 U=\nAssumption 1. Over the interval (-1, 1), l+() is decreasing and l_() is increasing, and both are twice differentiable\nAssumption|1|is a very natural one and includes many non-convex losses (seeBalsubramani & Freund (2016) for a more detailed discussion, much of which applies bitwise here). This and the additive decomposability of (15) over the V bits are the only assumptions we make on the reconstruction loss l(x(i), x(i)). The latter decomposability assumption is often natural when the loss is a log-likelihood, where it is tantamount to conditional independence of the visible bits given the hidden ones.\n-m+2l_(-1) if m<T(-1) (m) := l+(F-1(m))+l_(F-1(m) if me(T(-1),T(1)) < m+ 2l+(1) if mF(1)\nThen we may state the following result, describing the optimal decoding function for a genera reconstruction loss.\nTheorem 4. Define the potential function\nn min max x(1),.,x(n)E[-1,1]V x(1),...,x(n)E[-1,1]V, n i=1 VvE[V]: 1Ex,=bu V n 1 min 2 WERH n U=1 i=1\nif w*Te(i)<T(-1) w*e(i) e (F(-1),(1) if w*Te(i) T(1) if\nThe proof is nearly identical to that of the main theorem of Balsubramani & Freund|(2016). That. proof is essentially recapitulated here for each bit v E [V] due to the additive decomposability of the. loss, through algebraic manipulations (and one application of the minimax theorem) identical to the proof of Theorem[1] but using the more general specifications of I and I in this section. So we do not rewrite it in full here.\nWe do this by redefining the partial losses l(x$?), to any functions satisfying the following mono tonicity conditions\nGiven such a reconstruction loss, define the increasing function T(y) := l_ (y)-l+(y) : [-1, 1] +> R for which there exists an increasing (pseudo)inverse T-1. Using this we broaden the definition of the potential function I in terms of l+:\nFor each v E [V], i E [n], the minimizing x(i) ) is a sigmoid function of the encoding e(i) with weights equal to the minimizing w* above, as in Theorem[1 The sigmoid is defined as\nWe made some technical choices in the derivation of PC-AE, which prompt possible alternative not explored here for a variety of reasons. 
Recounting these choices gives more insight into ou framework.\nThe output reconstructions could have restricted pairwise correlations, i.e. XET = B. One option is to impose such restrictions instead of the existing constraints on X, leaving X unrestricted However, this is not in the spirit of this paper, because B is our means of indirectly conveying information to the decoder about how X is decoded.\nAnother option is to restrict both X and X. This is possible and may be useful in propagating correlation information between layers of deeper architectures while learning, but its minimax solution does not have the conveniently clean structure of the PC-AE derivation.\nIn a similar vein, we could restrict E during the encoding phase, using B and X. As B is changed only during this phase to better conform to the true data X, this tactic fixes B during the optimization, which is not in the spirit of this paper's approach. It also performed significantly worse in our experiments."}] |
H1W1UN9gg | [{"section_index": "0", "section_name": "DEEP INFORMATION PROPAGATION", "section_text": "Samuel S. Schoenholz Google Brain\nSamuel S. Schoenholz\nWe study the behavior of untrained neural networks whose weights and biases are randomly distributed using mean field theory. We show the existence of depth scales that naturally limit the maximum depth of signal propagation through these random networks. Our main practical result is to show that random networks may be trained precisely when information can travel through them. Thus, the depth scales that we identify provide bounds on how deep a network may be trained for a specific choice of hyperparameters. As a corollary to this, we argue that in networks at the edge of chaos, one of these depth scales diverges. Thus arbitrarily deep networks may be trained only sufficiently close to criticality. We show that the presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks. Finally, we develop a mean field theory for backpropagation and we show that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep neural network architectures have become ubiquitous in machine learning. The success o deep networks is due to the fact that they are highly expressive (Montufar et al.] 2014) while si multaneously being relatively easy to optimize (Choromanska et al.J2015f|Goodfellow et al.2014 with strong generalization properties (Recht et al.]2015). Consequently, developments in machin learning often accompany improvements in our ability to train increasingly deep networks. Despit this, designing novel network architectures is frequently equal parts art and science. This is, in par because a general theory for neural networks that might inform design decisions has lagged behin the feverish pace of design.\nIn this paper, we demonstrate the existence of several characteristic \"depth\"' scales that emerge. naturally and control signal propagation in these random networks. We then show that one of these depth scales, &c, diverges at the boundary between order and chaos. This result is insensitive tc. many architectural decisions (such as choice of activation function) and will generically be true a any order-to-chaos transition. We then extend these results to include dropout and we show tha even small amounts of dropout destroys the order-to-chaos critical point and consequently remove. the divergence in &c. Together these results bound the depth to which signal may propagate througl. random neural networks.\nWe then develop a corresponding mean field model for gradients and we show that a duality exists. between the forward propagation of signals and the backpropagation of gradients. The ordered and chaotic phases that|Poole et al.(2016) identified correspond to regions of vanishing and exploding. gradients, respectively. We demonstrate the validity of this mean field theory by computing gradients. of random networks on MNIST. This provides a formal explanation of the 'vanishing gradients'.\nSurya Ganguli\nStanford University"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "A pair of recent papers (Poole et al. 2016] Raghu et al.l2016) demonstrated that random neural networks are exponentially expressive in their depth. Central to their approach was the consideration of networks after random initialization. 
whose weights and biases were i.i.d. Gaussian distributed. In particular the paper byPoole et al.[(2016) developed a \"mean field' formalism for treating wide, untrained, neural networks. They showed that these mean field networks exhibit an order-to-chaos transition as a function of the weight and bias variances. Notably the mean field formalism is not closely tied to a specific choice of activation function or loss.\nphenomenon that has long been observed in neural networks (Bengio et al.] 1993). We continue. to show that the covariance between two gradients is controlled by the same depth scale that limits correlated signal propagation in the forward direction.\nFinally, we hypothesize that a necessary condition for a random neural network to be trainable is tha. information should be able to pass through it. Thus, the depth-scales identified here bound the set o. hyperparameters that will lead to successful training. To test this ansatz we train ensembles of deep. fully connected, feed-forward neural networks of varying depth on MNIST and CIFAR10, with anc. without dropout. Our results confirm that neural networks are trainable precisely when their deptl. is not much larger than &c. This result is dataset independent and is, therefore, a universal functio. of network architecture."}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "We begin by recapitulating the mean-field formalism developed in Poole et al.(2016). Consider a fully-connected, untrained, feed-forward, neural network of depth L with layer width N and some nonlinearity $ : R -> R. Since this is an untrained neural network we suppose that its weights and biases are respectively i.i.d. as W, ~ N(0, o?/Ni) and b, ~ N(0, o?). Notationally we set z, to be the pre-activations of the lth layer and y,+1 to be the activations of that layer. Finally, we take the input to the network to be y = x. The propagation of a signal through the network is described by the pair of equations,\nSince the weights and biases are randomly distributed, these equations define a probability distri bution on the activations and pre-activations over an ensemble of untrained neural networks. The \"mean-field\"' approximation is then to replace z, by a Gaussian whose first two moments match those of zl. For the remainder of the paper we will take the mean field approximation as given..\nE\nwhereDz = equations completely describe the evolution of a single input through a mean field neural network For any choice of o?, and o? with bounded , eq.3|has a fixed point at q* = lim,>o yaa..\nThe propagation of a pair of signals, x,a and x;b, through this network can be understood similarly.. Here the mean pre-activations are trivially the same as in the single-input case. The independence\nA corollary of these result is that asymptotically deep neural networks should be trainable pro. vided they are initialized sufficiently close to the order-to-chaos transition. The notion of \"edge of chaos'\" initialization has been explored previously. Such investigations have been both direct as in Bertschinger et al.(2005); Glorot & Bengio(2010) or indirect, through initialization schemes that favor deep signal propagation such as batch normalization (Ioffe & Szegedy2015), orthogo. nal matrix initialization (Saxe et al.2014), random walk initialization (Sussillo & Abbott!2014) composition kernels (Daniely et al.]2016), or residual network architectures (He et al.]2015). The. novelty of the work presented here is two-fold. 
First, our framework predicts the depth at which networks may be trained even far from the order-to-chaos transition. While a skeptic might ask when it would be profitable to initialize a network far from criticality, we respond by noting that. there are architectures (such as neural networks with dropout) where no critical point exists and so. this more general framework is needed. Second, our work provides a formal, as opposed to intuitive, explanation for why very deep networks can only be trained near the edge of chaos..\nz}=W{jy}+b} jl+1 = $(z{) Yi j\nConsider first the evolution of a single input, xi;a, as it evolves through the network (as quantified by of the pre-activations in the same layer will be..\nare Gaussian approximations to the pre-activations in the preceding layer with the correct covariance matrix. Moreover c' is the correlation between the two inputs after l lavers.\n0.25 100 100 (a) (b) (c) 10-1 101 0.20 10-2 10-2 103 103 0.15 ordered 104 104 210 - b 10-5 10-5 0.10 qo 10-6 106 0.05 chaotic 10-7 107 10-8 10-8 0.00 0.5 1.0 1.5 2.0 2.5 10-9 10-9 0 10 20 30 40 50 0 50 100 150 200 250 300 2 0 w 7 7\nFigure 1: Mean field criticality. (a) The mean field phase diagram showing the boundary betweer ordered and chaotic phases as a function of o?, and o?. (b) The residual [q* - qaa! as a function of depth on a log-scale with o? = 0.05 and o? , from 0.01 (red) to 1.7 (purple). Clear exponentia behavior is observed.(c) The residual c* ab| as a function of depth on a log-scale. Again, the exponential behavior is clear. The same color scheme is used here as in (b).\ndcl X1 z ab"}, {"section_index": "4", "section_name": "3 ASYMPTOTIC EXPANSIONS AND DEPTH SCALES", "section_text": "Our first contribution is to demonstrate the existence of two depth-scales that arise naturally withii. the framework of mean field neural networks. Motivating the existence of these depth-scales, w. iterate eq.3|and4Juntil convergence for many values of o?, between 0.1 and 3.0 and with o? = 0.05 which both qaa approaches q* and cab approaches c* is exponential over many orders of magnitude. ciently large l. Here, &g and &c define depth-scales over which information may propagate about the. magnitude of a single input and the correlation between two inputs respectively..\nWe will presently prove that qaa and cab are asymptotically exponential. In both cases we will use. the same fundamental strategy wherein we expand one of the recurrence relations (either eq.3|or eq.4) about its fixed point to get an approximate \"asymptotic\"' recurrence relation. We find that this asymptotic recurrence relation in turn implies exponential decay towards the fixed point over a depth-scale, Sx.\nWe first analyze eq. 3|and identify a depth-scale at which information about a single input may propagate. Let qaa = q* + e'. By construction so long as limt->oo qaa = q* exists it follows that.\nof the weights and biases implies that the covariance between different pre-activations in the same layer will be given by, E[zh;aj;t] = qabdij. The covariance, qab, will be given by the recurrence relation,\nDz1Dz2$(u1)$(u2) + oj\nExamining eq.4|it is clear that c* = 1 is a fixed point of the recurrence relation. To determine whether or not the c* = 1 is an attractive fixed point the quantity,.\nis introduced. Poole et al.(2016) note that the c* = 1 fixed point is stable if x1 < 1 and is unstable otherwise. 
Thus, X1 = 1 represents a critical line separating an ordered phase (in which c* = 1 and all inputs end up asymptotically correlated) and a chaotic phase (in which c* < 1 and all inputs end up asymptotically decorrelated). For the case of $ = tanh, the phase diagram in fig.[1|(a) is observed.\nThis establishes &a as a depth scale that controls how deep information from a single input may penetrate into a random neural network.\nDz1Dz2$'(u*)$'(u*) +0\n1.0 4.0 100 (a) (b 3.5 C 0.8 80 3.0 2.5 0.6 60 q0 2.0 0.4 1.5 40 2 = 0.5 1.0 0.2 = 1.8 20 0.5 0w 3.5 - 0.0 0.0 0 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 1 2 0 w 2 ab W\nFigure 2: Depth scales. (a) The iterative correlation map showing c'+1 as a function of c', for three. different values of o?.. Green inset lines show the linearization of the iterative map about the critical point, e-1/c. The three curves show networks far in the ordered regime (red), at the edge of chaos. (purple), and deep in the chaotic regime (blue). (b) The depth scale for information propagated in a. single input, &g as a function of o?, for o? = 0.01 (black) to o? = 0.3 (green). Dashed lines show. theoretical predictions while solid lines show measurements. (c) The depth scale for correlations. between inputs, &c for the same values of o?. Again dashed lines are the theoretical predictions. while solid lines show measurements. Here a clear divergence is observed at the order-to-chaos. transition.\nthere is only a single fixed point, cab = 1. In the chaotic regime we see that a second fixed point develops and the cab = 1 point becomes unstable. We see that the linearization about the fixed points becomes significantly closer to the trivial map near the order-to-chaos transition.\nTo test these claims we measure q and c directly by iterating the recurrence relations for qha and = 0.8 and c%h = 0.6. In this case we consider values of o?, between\nlog Dzo\" X1 A\nDz1Dz2$'(u) 0g )$'(u*\nIn the ordered phase c* = 1 and so &-1 = - log X1. Since the transition between order and chaos occurs when X1 = 1 it follows that &c diverges at any order-to-chaos transition so long as q* and c eXist.\n0.1 and 3.0 and o? between 0.01 and 0.3. For each hyperparameter settings we fit the resulting residuals, |daa - q*| and |cab c*|, to exponential functions and infer the depth-scale. We then compare this measured depth-scale to that predicted by the asymptotic expansion. The result of this measurement is shown in fig.2 In general we see that the agreement is quite good. As expected we see that Ec diverges at the critical point.\nAs observed in|Poole et al.[(2016) we see that the depth scale for the propagation of information in a single input, &g, is consistently finite and significantly shorter than &c. To understand why this is the. case consider eq.6|and note that for tanh nonlinearities the second term is always negative. Thus.. even as X1 approaches 1 we expect X1 + ?, J Dz\"(/q*z)$(q*z) to be substantially smaller than 1."}, {"section_index": "5", "section_name": "3.1 DROPOUT", "section_text": "The mean field formalism can be extended to include dropout. The main contribution here will. be to argue that even infinitesimal amounts of dropout destroys the mean field critical point, and therefore limits the trainable network depth. In the presence of dropout the propagation equation. eq.1 becomes,\nwhere p; ~ Bernoulli(p) and p is the dropout rate. 
As is typically the case we have re-scaled the sum by p-- so that the mean of the pre-activation is invariant with respect to our choice of dropout rate.\nFollowing a similar procedure to the original mean field calculation consider the fate of two inputs.. x.. and x%. ci;6, as they are propagated through such a random network. We take the dropout masks to. Li; a be chosen independently for the two inputs mimicking the manner in which dropout is employed in practice. With dropout the diagonal term in the covariance matrix will be (see Appendix7.3),.\n.\nThe variance of a single input with dropout will therefore propagate in an identical fashion to the vanilla case with a re-scaling o?, -> o?/p. Intuitively, this result implies that, for the case of a single input, the presence of dropout simply increases the effective variance of the weights\nComputing the off-diagonal term of the covariance matrix similarly (see Appendix|7.4\nwith u1, u2, and c', defined by analogy to the mean field equations without dropout. Here, unlike. in the case of a single input, the recurrence relation is identical to the recurrence relation without dropout. To see that c* = 1 is no longer a fixed point of these dynamics consider what happens. to eq.12|when we input c' = 1. For simplicity, we leverage the short range of &q to replace. = q6b = q*. We find (see Appendix7.5),\nMost significantly, we see that the cab = 1 is no longer a fixed point of the dynamics. Instead, as the dropout rate increases c'ab gets mapped to decreasing values and the fixed point monotonically. decreases.\n1 ) W{P}y+b\nDz1Dz2$(u1)$(u2) + oj\nl+1 Dz0 ab 0\nThe second term is positive for any p < 1. This implies that if chb = 1 for any l then c'+1 ab Thus, c* = 1 is not a fixed point of eq.12|for any p < 1. Since eq.12|is identical in form to eq.4|it follows that the depth scale for signal propagation with dropout will likewise be given by eq.9|with the substitutions q* -> q* and c* -> c* computed using eq.11and eq.12|respectively. Importantly, since there is no longer a sharp critical point with dropout we do not expect a diverging depth scale\n1.0 102 (a) (b) (c) 1.0 0.8 0.8 0.6 0.6 + 101 0.4 0.4 p =1.0 0.2 p = 0.95 0.2 p = 0.9 0.0 0.0 100 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 0.0 0.5 1.0 1.5 2.0 2.5 3.0 0 w 2 0 w 2\nFigure 3: Dropout destroys the critical point, and limits the depth to which information can propagate. different values of the dropout rate p for networks tuned close to their critical point. Green inset. lines show the linearization of the iterative map about the critical point, e-1/c. (b) The asymptotic. value of the correlation map, c*, as a function of o?, for different values of dropout from p = 1. (black) to p = 0.8 (blue). We see that for all values of dropout except for p = 1, c* does not show a sharp transition between an ordered phase and a chaotic phase. (c) The correlation depth scale &c as a function of o?, for the same values of dropout as in (b). We see here that for all values of p excep1. for p = 1 there is no divergence in c.\nTo test these results we plot in fig. 3|(b) the asymptotic correlation, c*, as a function of o?, for different values of dropout from p = 0.8 to p = 1.0.As expected, we see that for all p < 1 there is no sharp transition between c* = 1 and c* < 1. Moreover as the dropout rate increases the correlation c* monotonically decreases. Intuitively this makes sense. 
Identical inputs passed through two different dropout masks will become increasingly dissimilar as the dropout rate increases. In fig.3|(c) we show the depth scale, &c, as a function of o?, for the same range of dropout probabilities. We find that, as predicted, the depth of signal propagation with dropout is drastically reduced and. importantly, there is no longer a divergence in &c. Increasing the dropout rate continues to decrease\nNonetheless, we can work out a recurrence relation for the variance of the error, qaa = E[(o)?]. leveraging the Gaussian ansatz on the pre-activations. In order to do this, however, we must firsi make an additional approximation that the weights used during forward propagation are drawn in-. dependently from the weights used in backpropagation. This approximation is similar in spirit to the. vanilla mean field approximation and is reminiscent of work on feedback alignment (Lillicrap et al.. 2014). With this in mind we arrive at the recurrence (see appendix7.7),\nThe presence of X1 in the above equation should perhaps not be surprising. In Poole et al.(2016. they show that X1 is intimately related to the tangent space of a given layer in mean field neural\nThere is a duality between the forward propagation of signals and the backpropagation of gradients To elucidate this connection consider the backpropagation equations given a loss E\ndE o=$'(z)8+1W}+1 Ow!. IJ\nwith the identification o, = dE/dz,. Within mean field theory, it is clear that the scale of fluctuations of the gradient of weights in a layer will be proportional to E[(,)2] (see appendix7.6). In contrast. to the pre-activations in forward propagation (eq.[1), the o, will typically not be Gaussian distributed. even in the large layer width limit..\n9aa N\nnetworks. We note that the backpropagation recurrence features an explicit dependence on the ratio of widths of adjacent layers of the network, N+1/N. Here we will consider exclusively constant width networks where this factor is unity. For a discussion of the case of unequal layer widths see Glorot & Bengio(2010)\nSince X1 depends only on the asymptotic q* it follows that for constant width networks we expect eq.15|to again have an exponential solution with,\n(L-l)/s log X1\nNote that here -1 I = log X1 both above and below the transition. It follows that & can be both. positive and negative. We conclude that there should be three distinct regimes for the gradients\nIntuitively these three regimes make sense. To see this, recall that perturbations to a weight in laye l can alternatively be viewed as perturbations to the pre-activations in the same layer. In the orderec phase both the perturbed signal and the unperturbed signal will be asymptotically mapped to the same point and the derivative will be small. In the chaotic phase the perturbed and unperturbec signals will become asymptotically decorrelated and the gradient will be large.\n1027 102 (a) (b) 1023 1019 1015 1011 22 107 103 iM 10-1 101 10-5 10-9 10-13 10-17 10-21 10-25 10-29 100 0 50 100 150 200 1.0 1.5 2.0 2.5 3.0 3.5 4.0 7 2 0w\nFigure 4: Gradient backpropagation behaves similarly to signal forward propagation. (a) The 2- norm, ||Vws, E||? as a function of layer, l, for a 240 layer random network with a cross-entropy loss on MNIST. Different values of o?. from 1.0 (blue) to 4.0 (red) are shown. Clear exponential vanishing / explosion is observed over many orders of magnitude. 
(b) The depth scale for gradients predicted by theory (dashed line) compared with measurements from experiment (red dots). Simi larity between theory and experiment is clear. Deviations near the critical point are primarily due to finite size effects.\nTo investigate these predictions we construct deep random networks of depth L = 240 and layer width N = 300. We then consider the cross-entropy loss of these networks on MNIST. In fig.4(a) we plot the layer-by-layer 2-norm of the gradient, ||w, E||2, as a function of layer, l, for differ ent values of o?. We see that ||w. E|3 behaves exponentially over many orders of magnitude. Moreover, we see that the gradient vanishes in the ordered phase and explodes in the chaotic phase We test the quantitative predictions of eq.16|in fig.4[(b) where we compare [$| as predicted from theory with the measured depth-scale constructed from exponential fits to the gradient data. Here we see good quantitative agreement between the theoretical predictions from mean field random net- works and experimentally realized networks. Together these results suggest that the approximations on the backpropagation equations were representative of deep, wide, random networks.\nFinally, we show that the depth scale for correlated signal propagation likewise controls the depth. at which information stored in the covariance between gradients can survive. The existence of\n1. In the ordered phase, X1 < 1 and so $- > 0. We therefore expect gradients to vanish over a depth|$v|: 2. At criticality, X1 -> 1 and so - -> 0o. Here gradients should be stable regardless of depth. 3. In the chaotic phase, X1 > 1 and so &- < 0. It follows that in this regime gradients should explode over a depth$\nconsistent gradients across similar samples from a training set ought to be especially important for determining whether or not a given neural network architecture can be trained. To establish this depth-scale first note (see Appendix 7.8) that the covariance between gradients of two different inputs, xi;1 and xi;2, will be proportional to (Vw1 Ea) : (Vw! Eb) ~ E[6!;a;b] = qab where Ea is\nqab = qab Dz1Dz2$'(u1)$'(u2 Ni+2"}, {"section_index": "6", "section_name": "EXPERIMENTAL RESULTS", "section_text": "Taken together, the results of this paper lead us to the following hypothesis: a necessary conditior. for a random network to be trained is that information about the inputs should be able to propa. gate forward through the network, and information about the gradients should be able to propagate. backwards through the network. The preceding analysis shows that networks will have this property. precisely when the network depth, L, is not much larger than the depth-scale &c. This criterion is data independent and therefore offers a \"universal\"' constraint on the hyperparameters that depends. on network architecture alone. We now explore this relationship between depth of signal propagation and network trainability empirically.\n(a) (b) 102 102 6Ec 6c 7 2Ec 101 101 1.0 1.5 2.0 2.5 3.0 3.5 4.0 1.0 1.5 2.0 2.5 3.0 3.5 4.0 2 0w 2 W (c) (d) 6Ec 68c 102 102 101 101 1.0 1.5 2.0 2.5 3.0 3.5 4.0 1.0 1.5 2.0 2.5 3.0 3.5 4.0 2 0w 2 W\nFigure 5: Mean field depth scales control trainable hyperparameters. The training accuracy for neu ral networks as a function of their depth and initial weight variance, o?, from a high accuracy (red) tc low accuracy (black). In (a) we plot the training accuracy after 200 training steps on MNIST using SGD. 
Here overlayed in grey dashed lines are different multiples of the depth scale for correlated signal propagation, ngc. We plot the accuracy in (b) after 2000 training steps on CIFAR10 using SGD, in (c) after 14000 training steps on MNIST using SGD, and in (d) after 300 training steps on MNIST using RMSPROP. Here we overlay in white dashed lines 6&c.\nTo investigate this prediction, we consider random networks of depth 10 L 300 and 1 o?, 4 with o? = 0.05. We train these networks using Stochastic Gradient Descent (SGD) and RMSProp\nwhere u1 and u2 are defined similarly as for the forward pass. Expanding asymptotically it is clear defined in the forward pass\non MNIST and CIFAR10. We use a learning rate of 10-3 for SGD when L 200, 10-4 for larger L, and 10-5 for RMsProp. These learning rates were selected by grid search between 10-6 and. 10-2 in exponentially spaced steps of size 10. We note that the depth dependence of learning rate. was explored in detail in Saxe et al.(2014). In fig.5](a)-(d) we color in red the training accuracy. that neural networks achieved as a function of o?, and L for different datasets, training time, and. choice of minimizer (see Appendix7.10|for more comparisons). In all cases the neural networks. over-fit the data to give a training accuracy of 100% and test accuracies of 98% on MNIST and 55% on CIFAR10. We emphasize that the purpose of this study is to demonstrate trainability as opposed to optimizing test accuracy.\nWe now make the connection between the depth scale, &c, and the maximum trainable depth more precise. Given the arguments in the preceding sections we note that if L = n&c then signal through the network will be attenuated by a factor of en. To understand how much signal can be lost while still allowing for training, we overlay in fig.5](a) curves corresponding to n&c from n = 1 to 6. We. find that networks appear to be trainable when L 6&c. It would be interesting to understand why. this is the case.\nMotivated by this argument in fig.5 (b)-(d) in white, dashed, overlay we plot twice the predicted depth scale, 6&c. There is clearly a relationship between the depth of correlated signal propagation and whether or not these networks are trainable. Networks closer to their critical point appear to train more quickly than those further away. Moreover, this relationship has no obvious dependence on dataset, duration of training, or minimizer. We therefore conclude that these bounds on trainable hyperparameters are universal. This in turn implies that to train increasingly deep networks, one must generically be ever closer to criticality.\n(a) (b) (c) 102 102 102 6Ec 101 101 101 1.0 1.5 2.0 2.5 3.0 3.5 4.0 1.0 1.5 2.0 2.5 3.0 3.5 4.0 1.0 1.5 2.0 2.5 3.0 3.5 4.0 0w 2 Ow 2 0w 2\nNext we consider the effect of dropout. As we showed earlier, even infinitesimal amounts of dropout disrupt the order-to-chaos phase transition and cause the depth scale to become finite. However, since the effect of a single dropout mask is to simply re-scale the weight variance by o?, -> o? / p, the gradient magnitude will be stable near criticality, while the input and gradient correlations will not be. This therefore offers a unique opportunity to test whether the relevant depth-scale is |1/ log X1 0r &c.\nIn fig.6 we repeat the same experimental setup as above on MNIST with dropout rates p = 0.99, 0.98, and 0.94. We observe, first and foremost, that even extremely modest amounts of dropou limit the maximum trainable depth to about L = 100. 
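To make such ξc overlays reproducible in spirit, here is a sketch (ours) that iterates the correlation map of eq. 12 at q̃aa = q̃bb = q̃* in the presence of dropout and reads off ξc from the exponential approach to the fixed point; the grid sizes, the initial c⁰, the residual cutoff, and the example hyperparameters are all arbitrary choices:

```python
import numpy as np

# 1-D and 2-D grids approximating the Gaussian measures Dz and Dz1 Dz2.
g1, d1 = np.linspace(-8.0, 8.0, 2001, retstep=True)
w1 = np.exp(-g1**2 / 2) / np.sqrt(2 * np.pi) * d1
g2, d2 = np.linspace(-8.0, 8.0, 401, retstep=True)
Z1, Z2 = np.meshgrid(g2, g2)
w2 = np.exp(-(Z1**2 + Z2**2) / 2) / (2 * np.pi) * d2**2

def q_star(sw2, sb2, p, iters=500):
    """Fixed point of the single-input recursion with dropout, eq. (11): sw2 -> sw2 / p."""
    q = 1.0
    for _ in range(iters):
        q = (sw2 / p) * np.sum(np.tanh(np.sqrt(q) * g1) ** 2 * w1) + sb2
    return q

def c_next(c, qs, sw2, sb2):
    """Correlation map, eq. (12), evaluated at q_aa = q_bb = q*."""
    u1 = np.sqrt(qs) * Z1
    u2 = np.sqrt(qs) * (c * Z1 + np.sqrt(max(1.0 - c**2, 0.0)) * Z2)
    return (sw2 * np.sum(np.tanh(u1) * np.tanh(u2) * w2) + sb2) / qs

def xi_c(sw2, sb2, p, iters=400):
    qs = q_star(sw2, sb2, p)
    traj = [0.6]
    for _ in range(iters):
        traj.append(c_next(traj[-1], qs, sw2, sb2))
    traj = np.array(traj)
    resid = np.abs(traj - traj[-1])
    keep = np.flatnonzero(resid > 1e-10)[10:100]   # |c^l - c*| ~ exp(-l / xi_c)
    slope = np.polyfit(keep, np.log(resid[keep]), 1)[0]
    return -1.0 / slope

print("xi_c(sw2=1.5, sb2=0.05, p=0.94) =", xi_c(1.5, 0.05, 0.94))
```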
We additionally notice that the depth-scale Sc, predicts the trainable region accurately for varying amounts of dropout."}, {"section_index": "7", "section_name": "6 DISCUSSION", "section_text": "In this paper we have elucidated the existence of several depth-scales that control signal propagation. in random neural networks. Furthermore, we have shown that the degree to which a neural network can be trained depends crucially on its ability to propagate information about inputs and gradients.\nFigure 6: The effect of dropout on trainability. The same scheme as in fig.5lbut with dropout rates of (a) p = 0.99, (b) p = 0.98, and (c) p = 0.94. Even for modest amounts of dropout we see an upper bound on the maximum trainable depth for neural networks. We continue to see good agreement between the prediction of our theory and our experimental training accuracy.\nthrough its full depth. At the transition between order and chaos, information stored in the correla.. tion between inputs can propagate infinitely far through these random networks. This in turn implies. that extremely deep neural networks may be trained sufficiently close to criticality. However, our contribution goes beyond advocating for hyperparameter selection that brings random networks to. be nearly critical. Instead, we offer a general purpose framework that predicts, at the level of mean. field theory, which hyperparameters should allow a network to be trained. This is especially relevant. when analyzing schemes like dropout where there is no critical point and which therefore imply an. upper bound on trainable network depth.\nAn alternative perspective as to why information stored in the covariance between inputs is crucial for training can be understood by appealing to the correspondence between infinitely wide Bayesian. neural networks and Gaussian Processes (Neal||2012). In particular the covariance, qab, is intimately. related to the kernel of the induced Gaussian Process. It follows that cases in which signal stored in. the covariance between inputs may propagate through the network correspond precisely to situations. in which the associated Gaussian Process is well defined..\nOur work suggests that it may be fruitful to investigate pre-training schemes that attempt to perturb. the weights of a neural network to favor information flow through the network. In principle this could be accomplished through a layer-by-layer local criterion for information flow or by selecting. the mean and variance in schemes like batch normalization to maximize the covariance depth-scale.\nThese results suggest that theoretical work on random neural networks can be used to inform prac-. tical architectural decisions. However, there is still much work to be done. For instance, the frame work developed here does not apply to unbounded activations, such as rectified linear units, where. it can be shown that there are phases in which eq.3|does not have a fixed point. Additionally, the. analysis here applies directly only to fully connected feed-forward networks, and will need to be. 
extended to architectures with structured weight matrices such as convolutional networks..\nWe close by noting that in physics it has long been known that, through renormalization, the behavior of systems near critical points can control their behavior even far from the idealized critical case We therefore make the somewhat bold hypothesis that a broad class of neural network topologies will be controlled by the fully-connected mean field critical point."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Ben Poole, Jeffrey Pennington, Maithra Raghu, and George Dahl for useful discussions We are additionally grateful to RocketAI for introducing us to Temporally Recurrent Online Learn. ing and two-dimensional time\nAnna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In A1STATS. 2015.\nA. Daniely, R. Frostig, and Y. Singer. Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity. arXiv:1602.05897, 2016.\nTimothy P Lillicrap, Daniel Cownden, Douglas B Tweed, and Colin J Akerman. Random feedback weights support learning in deep neural networks. arXiv:1411.0247. 2014..\nDavid Sussillo and LF Abbott. Random walks: Training very deep nonlinear feed-forward networks with smart initialization. CoRR. vol. abs/1412.6558. 2014\nHere we present derivations of results from throughout the paper"}, {"section_index": "9", "section_name": "Result:", "section_text": "Consider the recurrence relation for the variance of a single input\nVq*z\n. laa\n1 a fixed point of the dynamics, q*. qaa can be expanded about the fixed point to yield the mptotic recurrence relation,\nWe begin by first expanding to order e'\n+ 0? + O((e')2 q*z)$'(q*z)+O(( Dzz$(Vq*z)'(Vq*z) + O((e)2)\nDzz$(v ) +O((e)2) a"}, {"section_index": "10", "section_name": "Result:", "section_text": "Dz1Dz2$(u1)$(u2) + 0p\nDz1Dz2$'(u1)$'(u2 +"}, {"section_index": "11", "section_name": "Derivation:", "section_text": "2 _ 2c*e e'z1 +O(e2\nSince the relaxation of qaa and qhb to q* occurs much more quickly than the convergence of qab we approximate qaa = qbb = q* as inPoole et al.(2016). We therefore consider the perturbation = c* + e'. It follows that we may make the approximation,. q*=\nWe now consider the case where c* < 1 and c* = 1 separately; we will later show that these two results agree with one another. First we consider the case where c* < 1 in which case we may safely expand the above equation to get,.\nThis allows us to in turn approximate the recurrence relation\nl+1 ab\n$(u*)$' (u)\nwhere u* and u* are appropriately defined asymptotic random variables. This leads to the asymptotic recurrence relation,\nDz1Dz2$'(u*)$'(u*\n+1 Dz1Dz2$(u*)$(u) + op ab\nIt follows that the asy mptotic recurrence relation in this case will be\nDz a*\nwhere X1 is the stability condition for the ordered phase. We note that although the approximations were somewhat different the asymptotic recurrence relation for c* < 1 reduces eq.47|result for. c* = 1. We may therefore use4for all c*"}, {"section_index": "12", "section_name": "Result:", "section_text": "l+1 Dz1Dz2$(u*)$(u5) + 0j (34) ab 9* (35) (36) Dz1Dz2z1$(u*)'(u* )$'(u* (37) Dz1Dz2($'(u*)$'(u*)+ c* 38) D z1Dz20 (u*) (39) 1\n=Vq*z1+1 ).a* Z2 - Vq*e'z1 + O(e3/2\nand so the lowest order correction is of order O(e) as opposed to O(e). 
As usual we now expand the recurrence relation, noting that u* = u* is independent of z, when c* = 1 to find.\nDziDzz (u*)$(u)+j (42) ab q* $(u*) (u*)+ q*e'z?$\"(u*) 2q*e'z2 1 (43) (44) (45) (46) will he\nIn the presence of dropout with rate p, the variance of a single input as it is passed through the network is described by the recurrence relation."}, {"section_index": "13", "section_name": "Derivation:", "section_text": "Recall that the recurrence relation for the pre-activations is given by\n1 WlP}y}+b\nqaa =E[(z)2] 1 E{(W{,)]E[(p})]E[(y})] +E[(6})] W\nwhere we have used the fact that E[(p )2] = p"}, {"section_index": "14", "section_name": "Result:", "section_text": "The co-variance between two signals, z4,a and zh,b, with separate i.i.d. dropout masks ph,a and ph, is given by,\nTab = Dz1Dz2$(u1)$(u2) + 0b"}, {"section_index": "15", "section_name": "Derivation:", "section_text": "Proceeding directly we find that\n1 E[(W})]E[p};a]E[p};b]E[y};aY}j;b] + E[b] Dz1Dz2$(u1)(u2) + 06\nwhere we have used the fact that E[ph:a] = E[p:s] = p. We have also used the same substitution for E[y'gy':h] used in the original mean field calculation with the appropriate substitution.\n7.5 THE LACK OF A c* = 1 FIXED POINT WITH DROPOUT"}, {"section_index": "16", "section_name": "Result:", "section_text": "= 1 then it follows that\nto eq.4] u1 = ql..z1 and u2 :\nDz$2 (Vq*z ab oq\nPlugging in c! 1 with a q* we find that u1. /q* z1. It follows that.\n1ab 9* 1 a* z +o q* 1 1 0 W 9* Dz oq\nas required. Here we have integrated out z2 since nether u1 nor u2 depend on it"}, {"section_index": "17", "section_name": "Result:", "section_text": "In mean field theory the expected magnitude of the gradient ||w! E|? will be proportional to E[(8,)21"}, {"section_index": "18", "section_name": "Derivation:", "section_text": "We first note that since the W!. . are i.i.d. it follows that\nand the result follows"}, {"section_index": "19", "section_name": "Result:", "section_text": "In mean field theory the recursion relation for the variance of the errors, q' - E[(s,)?] is given by\n+1 qab (57) ab 1 (58) (59) 9x 1 (60) q* (61) )zd q*z oq\n2 dE l|Vw!E|= aw! 2 dE ~ NN+1E aw!\nwhere we have used the fact that the first line is related to the sample expectation over the different realizations of the W,, to approximate it by the analytic expectation in the second line. In mean field. theory since the pre-activations in each layer are assumed to be i.i.d. Gaussian it follows that,.\n2 dE E E8)]E[(z] aw\nl+1\nComputing the variance directly and using mean field approximation..\n`E[(5+1)] Eo 2 ~l+1 ET wlaa +2 2 ~l+1 ow9aa N1 a X1\nas required. In the last step we have made the approximation that qag ~ q* since the depth scale for the variance is short ranged."}, {"section_index": "20", "section_name": "Derivation:", "section_text": "dEa dEb Aw!, aw!. ij IJ dEa dEb ~ NN+1E Ow!. ow!\n0Ea dEb JE E[8};a;b]E[b(zi;a)(2 aw!. aw!\nand the result follows"}, {"section_index": "21", "section_name": "Result:", "section_text": "Dz1Dz2$'(u1)$'(u2 \\/ N1+2\nunder backpropagation"}, {"section_index": "22", "section_name": "Derivation", "section_text": "`E[g}+16}+1]E[(W}+1)2 b = i:b) Dz1Dz2$'(u1)$'(u2 qab N\nqa=E[(8};a)]=E[('(zi;a))]E[(8}+1)]E[(W}+1)2 (66) O w E[(s] (67) ~ l+1 (68) aa Owlaa 2 ~ l- (69) qaa (70) Ni+2\nIn mean field theory we expect the covariance between the gradients of two different inputs to scale\nEa) :(Vw!.Eb) ~ E[8i;a0i;b] V W! 
We proceed in a manner analogous to Appendix 7.6. Note that in mean field theory, since the weights are i.i.d., it follows that

$$\left(\nabla_{W^l}E_a\right)\cdot\left(\nabla_{W^l}E_b\right)=\sum_{ij}\frac{\partial E_a}{\partial W^l_{ij}}\frac{\partial E_b}{\partial W^l_{ij}}\approx N_l N_{l+1}\,\mathbb{E}\left[\frac{\partial E_a}{\partial W^l_{ij}}\frac{\partial E_b}{\partial W^l_{ij}}\right]$$

where, as before, the final term approximates the sample expectation. Since the weights in the forward and backward passes are chosen independently, it follows that we can factor the expectation,

$$\mathbb{E}\left[\frac{\partial E_a}{\partial W^l_{ij}}\frac{\partial E_b}{\partial W^l_{ij}}\right]=\mathbb{E}\left[\delta^l_{i;a}\delta^l_{i;b}\right]\mathbb{E}\left[\phi(z^{l-1}_{j;a})\phi(z^{l-1}_{j;b})\right]$$

and the result follows.

Here we include some more experimental figures that investigate the effects of training time, minimizer, and dataset more closely.

(Figure: four panels (a)-(d); each plots network depth $L$ on a log scale against $\sigma_w^2$ from 1.0 to 4.0.)

Figure 7: Training accuracy on MNIST after (a) 45, (b) 304, (c) 2048, and (d) 13780 steps of SGD with learning rate $10^{-3}$.

(Figure: four panels (a)-(d); each plots network depth $L$ on a log scale against $\sigma_w^2$ from 1.0 to 4.0.)

Figure 8: Training accuracy on MNIST after (a) 45, (b) 304, (c) 2048, and (d) 13780 steps of RMSProp with learning rate $10^{-5}$."}]
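As a companion to these mean-field fits, the forward recursion can also be checked by direct simulation; the following Monte Carlo sketch (ours, with arbitrary width, depth, and ensemble size) propagates a single input through random tanh networks via eq. 1 and measures the approach of the empirical q^l_aa to its fixed point, as in fig. 1(b):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, trials = 1000, 60, 20          # width, depth, ensemble size (arbitrary choices)
sw2, sb2 = 2.0, 0.05

q = np.zeros((trials, L))
for t in range(trials):
    y = rng.normal(size=N)                                   # input y^0
    for l in range(L):
        W = rng.normal(scale=np.sqrt(sw2 / N), size=(N, N))  # W_ij ~ N(0, sw2 / N)
        b = rng.normal(scale=np.sqrt(sb2), size=N)           # b_i  ~ N(0, sb2)
        zpre = W @ y + b                                     # pre-activations, eq. (1)
        q[t, l] = np.mean(zpre ** 2)                         # empirical q^l_aa
        y = np.tanh(zpre)

q_mean = q.mean(axis=0)
q_star = q_mean[-10:].mean()         # deep layers sit at the fixed point
for l in (0, 2, 5, 10, 20):          # residual decays down to the finite-size noise floor
    print(f"layer {l:2d}: |q_aa - q*| = {abs(q_mean[l] - q_star):.3e}")
```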
B1mAJI9gl | [{"section_index": "0", "section_name": "TOWARDS UNDERSTANDING THE INVERTIBILITY OF CONVOLUTIONAL NEURAL NETWORKS", "section_text": "{annacg, yeezhang, kibok, yutingzh, honglak} @umich.edu\nSeveral recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection to a particular model of model-based compressive sensing (and its recovery algorithms and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and the CNN trained for classification in practical scenarios."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep learning has achieved remarkable success in many technological areas (Bengio et al. 2013 Schmidhuber2015), including computer vision (Krizhevsky et al.]2012) Szegedy et al 2015 Simonyan and Zisserman2015), automatic speech recognition (Hinton et al.[2012) Hannun et al. 2014), natural language processing (Collobert et al.]2011] Mikolov et al.|2013Cho et al. 2014) bioinformatics (Chicco et al.]2014), even high energy particle physics (Baldi et al.2014). In. particular, deep Convolutional Neural Networks (CNNs) (LeCun et al.]1989||Krizhevsky et al.]2012 Simonyan and Zisserman[[2015] have been a critical enabling technique for analyzing images and. sequential data.\nFollowing the unprecedented success of deep networks, there has been some theoretical work. (e.g., Arora et al.[(2014] 2015); Paul and Venkatasubramanian (2014)) that suggest several mathemat ical models for different deep learning architectures. However, theoretical analysis and understanding lag behind the very rapid evolution and empirical success of deep architectures, and more theoretical analysis is needed to better understand the state-of-the-art deep architectures, and possibly to improve them further.\nThis property is intriguing because convolutional neural networks are typically trained with discrimi native objectives (i.e., unrelated to reconstruction) with a large amount of labels, such as the ImageNet dataset. For example,Dosovitskiy and Brox (2016) used upsampling-deconvolutional architectures to invert the hidden activations of feedforward CNNs to the input domain. In other related work Zhao et al.(2016) proposed stacked a what-where network via a (deconvolutional) decoder and demonstrate its promise in unsupervised and semi-supervised settings.Bruna et al.(2014) studied signal discovery from generalized pooling operators using image patches on non-convolutional small scale networks and datasets.Zhang et al.(2016) showed that CNNs discriminately trained for image classification (e.g., VGG Net (Simonyan and Zisserman2015)) are almost fully invertible using pooling switches. 
Despite these interesting results, there is no clear theoretical explanation as to why CNNs are invertible yet.\nWe introduce three new concepts that, coupled with the accepted notion that images have spars representations, guide our understanding of CNNs:"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper, we attempt to address the gap between the empirical success and theoretical understand ing of the Convolutional Neural Nets, in particular its invertibility (i.e., reconstructing the input from the hidden activations), by analyzing a simplified mathematical model using random weights1.\nIn other words, we give a theoretical connection to a particular model of model-based compressive. sensing (and its recovery algorithms) and CNNs. We show empirically that large-scale deep con. volution networks are consistent with our mathematical analysis. We then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. using filters from trained networks. Finally, we observe that it makes a significant difference which filters one uses for encoding and decoding, whether they are trained specifically for reconstruction, or. random, or the same for both procedures. This paper explores these properties and elucidate specific. empirical aspects that any more sophisticated mathematical model should take into account!2."}, {"section_index": "3", "section_name": "2 PRELIMINARIES", "section_text": "In this section, we set the stage for our mathematical analysis in Section[3] We begin with discussior. on the use of random weights in (convolutional) neural networks, and then provide the definition and models for CNNs. Then, we discuss compressive sensing and sparse signal recovery. We define a. particular model of sparsity that we will use throughout our analysis and detail the Iterative Harc. Thresholding (IHT) algorithm which is the basis of our reconstruction analysis..\nIn order to simplify our notation and to make clear our analysis, we focus on a single layer in the analysis instead of multiple layers!3|Also, we assume that all of our input signals are vectors rather than matrices and that any operations we would ordinarily carry out on images (e.g., convolving with a filter bank, dividing into regions over which we pool coefficients), we do on vectors with the appropriate modifications for a simplified structure. While these assumptions ease our exposition they do not change the nature of our arguments nor their implications for images. Furthermore, we demonstrate the validity of our results in two-dimensional natural images."}, {"section_index": "4", "section_name": "2.1 EFFECTIVENESS OF GAUSSIAN RANDOM FILTERS", "section_text": "We analyze theoretically CNNs with Gaussian random filters, which have been surprisingly effective in unsupervised and supervised deep learning tasks. Jarrett et al.(2009) showed that random filters in 2-layer CNNs work well for image classification. In addition, Saxe et al.(2011) observed tha convolutional layer followed by pooling layer is frequency selective and translation invariant, ever with random filters, and these properties lead to good performance for object recognition tasks On the other hand,Giryes et al.(2016) proved that CNNs with random Gaussian filters have metric preservation property, and they argued that the role of training is to select better hyperplanes discriminating classes by distorting boundary points among classes. 
According to their observation random filters are in fact a good choice if training data are initially well-separated. Also,He et al (2016) empirically showed that random weight CNNs can do image reconstruction well.\n2we note that our model may not be an exact replica of a real setting, but for mathematical analysis, it is a simplified but representative abstraction of practical settings. A number of works show that random weight CNNs still achieve surprisingly good classification accuracy although they may not match the state-of-the-art results; see Sections2.1and 4.1|for more discussion.\n3we can extend the equivalency on a single layer of CNNs to multiple layer CNNs simply by using the outpu on one layer as the input to another, still using the steps of the inner loop of IHT.\n1. we provide a particular model of sparse linear combinations of the learned filters that are consistent with natural images; also, this model of sparsity is itself consistent with the feedforward network; 2. we show that the effective matrices that capture explicitly the convolution of multiple filters. exhibit a model-Restricted Isometry Property (model-RIP) (Baraniuk et al.]2010); and. 3. our model can explain each layer of the feedforward CNN algorithm as one iteration. of Iterative Hard Thresholding (IHT) (Blumensath and Davies 2009) for model-based. compressive sensing and, hence, we can reconstruct the input simply and accurately.\nTo better demonstrate the effectiveness of Gaussian random CNNs, we evaluate their classification performance on CIFAR-10: see Section 4.1|for details. We find that a 3-layer Gaussian random CNN is able to achieve ~ 75% accuracy on the test set, with only the last classifier layer optimized, (see Table 1|for more details). Even though this number is far from the state-of-the-art results, it is surprisingly good considering the networks are almost untrained. Our theoretical results may provide another new perspective on explaining these phenomena.\nover M channels with a filter bank consisting of K different filters. Note that a filter bank has K filters of size l M, such that there are lM K parameters in this architecture.."}, {"section_index": "5", "section_name": "2.2 CONVOLUTIONAL NEURAL NETS", "section_text": "We define a single layer of our CNN as follows. We assume that the input signal x consists of M channels, each of length D, and we write x E RM D. For each of the input channels, m = 1, . .., M. let wi.m, i = 1,..., K denote one of K filters, each of length l. Let t be the stride length, th number of indices by which we shift each filter. Note that t can be larger than 1..\nWe assume that the number of shifts, n = (D - l)/t + 1, is an integer. Let w3,m be a vector of lengtl D that consists of the (i, m)-th filter shifted by jt, j = 0, . .. , n -- 1 (i.e., wt.m has at most l non-zerc entries). We will concatenate over the M channels each of these vectors (as row vectors) to form a large matrix, W, which is the Kn M D matrix made up of K blocks of the n shifts of each filte in each of M channels. We assume that Kn > MD. We also assume that the Kn row vectors of W span RM D and that we have normalized the rows so that they have unit l2 norm. 
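The matrix W of Section 2.2 can be assembled exactly as described; below is a small sketch (ours, with made-up sizes) that stacks the n stride-t shifts of each of the K filters across the M channels into the Kn x MD matrix and unit-normalizes its rows, as the text assumes:

```python
import numpy as np

def build_W(filters, D, t):
    """Stack shifted filters into the Kn x MD matrix W of Section 2.2.

    filters: shape (K, M, l) -- K filters over M channels, each of length l.
    D: per-channel signal length; t: stride, with n = (D - l)/t + 1 assumed integral.
    """
    K, M, l = filters.shape
    n = (D - l) // t + 1
    W = np.zeros((K * n, M * D))
    for i in range(K):
        for j in range(n):
            row = np.zeros((M, D))
            row[:, j * t : j * t + l] = filters[i]        # filter (i, m) shifted by j*t
            W[i * n + j] = row.ravel()
    return W / np.linalg.norm(W, axis=1, keepdims=True)   # unit l2-norm rows

rng = np.random.default_rng(0)
K, M, l, D, t = 96, 32, 5, 32, 1                          # made-up sizes
W = build_W(rng.normal(size=(K, M, l)), D, t)
print(W.shape)   # (Kn, MD) = (2688, 1024); h = W @ x gives the pre-pooling hidden units
```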
We assume that the hidden units of the feed-forward CNN are computed by multiplying an input signal x E RM D by the matrix W (i.e., convolving, in each channel, by a filter bank of size K, and summing over the channels to obtain Kn outputs), applying the ReLU function to the Kn outputs, and then selecting the value with maximum absolute value in each of the K blocks; i.e., we perform max pooling ove each of the convolved filters and sum over the channels|We use h = W x for the hidden activatior computed by a single layer CNN without pooling. Figure1illustrates the architecture."}, {"section_index": "6", "section_name": "2.3 COMPRESSIVE SENSING", "section_text": "4The convolution can be computed more efficiently than a straight-forward matrix multiplication, but the are mathematically equivalent.\nwe note that this is a sufficient condition and that there are other, less restrictive sufficient conditions, as. well as more complicated necessary conditions. Furthermore, we have not given the exact, quantitative relations amongst the parameters. For simplicity, we stick with this definition..\nW x h D MD n Kn (WK( L (WK.M)T* j=0,...,n-1\nLet be a i j matrix with j > i. We say that satisfies the Restricted Isometry Property RIP(k, dk. (or, just RIP) if there is a distortion factor ok > O such that for all z E R with exactly k non-zero. entries, (1 dk)||z|? z|? (1 + &k)|z?. If satisfies RIP (for appropriate sparsity level k and sufficiently small d) and if z E R is k-sparse, then, given the vector x = z E R', we. can efficiently recover z (see|Candes[(2008) for more details)] There are many efficient algorithms. for doing so, including l1 sparse coding (e.g., l2 minimization with l1 regularization) and greedy,. iterative algorithms (such as Iterative Hard Thresholding or IHT)..\nModel-based compressive sensing. While sparse signals are a natural model for some applications. they are less realistic for CNNs. We consider a vector z E IR Kn as the true sparse code for generating. the CNN input x with a particular model of sparsity. Rather than permitting k non-zero entries. anywhere in the vector z, we divide the support of z into K contiguous blocks of size n and we. stipulate that from each block there is at most one non-zero entry in z with a total of k non-zero\n(1ok)lzll<l|zl?< (1+ 8k)lzl?\nFor our analysis, we also need matrices that satisfy the model-RIP condition for vectors z E M? We denote the distortion factor d2k for such matrices. Note that dk d2k < 1.\nM(z, k) = block-sparsify(upsample(max-pool(z), s), k)\nwhere s denotes the upsampling switches that indicate where to place the non-zero values in the upsampled activations. Taking the pooling switches known from the max pooling operation as s, we specifically define M as the nesting of the max pooling and the unpooling with known switch. We. define this special case as\nAlternatively, using the fixed uniform switches as s, we specifically define MI as the nesting of the max pooling and the naive unsampling, denoted by Mfxed. In the rest of this paper, our theoretica. analysis are generic to any type of valid upsampling switchesI so we use MI(z, k) to denote the structured sparse approximation algorithm without worrying about s. The two special cases Mknowr and Mfxed are used in the empirical analysis when we need to specify M(z, k) as a fully concrete Operator.\nThe main recovery algorithm that we focus on is a model-sparse version of Iterative Hard Thresholding. 
(IHT) (see Blumensath and Davies(2009)), not because we are interested in recovering model sparse signals, per se, but because one iteration of IHT for our model of sparsity captures exactly\n6Valid switches should place a non-zero value at exactly one location\nentries. We call a vector with this sparsity model model-k-sparse and denote the union of all k analysis, we consider linear combinations of two model-k-sparse signals. To be precise, suppose that. z = Q1 z1 + Qz2 is the linear combination of two elements in Mk. Then, we say that z lies in the linear subspace M? that consists of all linear combinations of vectors from Mk..\nWe say that a matrix satisfies the model-RIP condition for parameter k if, there is a distortior actor dk. > 0 such that, for all z E Mk\nSeeBaraniuk et al.(2010) for the definitions of model sparse and model-RIP, as well as the necessary modifications to account for signal noise and compressible (as opposed to exactly sparse) signals (which we have neglected to consider to keep our analysis simple). Intuitively speaking, a matrix that satisfies the model-RIP is a nearly an orthonormal matrix for a particular set of sparse vectors with a particular sparsity model or pattern.\nMany efncient algorithms have been proposed for sparse coding and compressive sensing (Olshausen et al.l|1996. Mallat and Zhang 1993Beck and Teboulle2009). As with traditional compressive sensing, there are efficient. algorithms for recovering model-k-sparse signals from. measurements (seeBaraniuk et al.(2010)), assuming the existence of an efficient structured sparse approximation. algorithm MI, that given an input vector and the sparsity. parameter, returns the vector closest to the input with the. specified sparsity structure.\nendwnne the activations of the original size by retaining the mos1 8: return z <Zi significant values. The max pooling can be viewed as two steps: 1) zeroing out the locally non-maximum values 2) downsampling the activations with the locally maximum values retained. To study the pooled activations with sparsity structures, we can recover dimension loss from the second step (downsam pling step) by an unsampling operator. This procedure defines our structured sparse approximation algorithm M(z, k), where z is the original (unpooled) code, and k is the sparsity parameter for further sparsification, which guarantees that M(z, k) is a model-k-sparse signal. With the standard layered formulation for neural networks, we have"}, {"section_index": "7", "section_name": "3 ANALYSIS", "section_text": "To motivate our more formal analysis, we begin with a simple example. Suppose that the matrix W is an orthonormal basis for IR M D and define = W1\nProposition 1. A one-layer CNN using the matrix T, with no pooling, gives perfect reconstruction (with the matrix ) for any input vector x E RMD\nW1 (h+ -h_) = WTh = WTWx = x\nIn the example above, we have pairs of vectors (w, w) in our matrix . This settings allow us t turn what would ordinarily be a nonlinear function, ReLU, into a linear one. In fact, the assumptio. that trained CNN filters come in positive and negative is validated by|Shang et al.(2016), whicl. makes a CNN much easier to analyze within the model compressed sensing framework..\nSuppose that we have a vector z that we split into positive and negative components, z = Z+ and that we synthesize (or construct) a signal x from z using the matrix [wT WT]. 
Then, we have\nWT z+-z_)=WTz=x"}, {"section_index": "8", "section_name": "3.1 MODEL-RIP AND RANDOM FILTERS", "section_text": "Our first main result says that if we use Gaussian random filters in our CNN, then, with high. probability, the transpose of the matrix W formed by the convolutions with these filters has the. model-RIP property. In other words, Gaussian random filters generate a matrix whose transpose. wT is almost an orthonormal transform for sparse signals with a particular sparsity pattern (that. is consistent with our pooling procedure). The bounds in the theorem tell us that we must balance. the size of the filters l and the number of channels M against the sparsity of the hidden units k, the number of the filter banks K, the number of shifts n, the distortion parameter dk, and the failure. probability e. The proof is in Appendix|A.\n7Multiple iterations of IHT can improve the quality of signal recovery. However, it is rather equivalent to the recurrent version of CNNs and does not fit to the scope of this work.\na feedforward CNN]Algorithm[1describes the model-based IHT algorithm. In particular, the sequence of steps 4-6 in the middle IHT (without the outer iterative loop) is exactly one layer of a feedforward CNN. As a result, the theoretical analysis of IHT for model-based sparse signal recovery serves as a guide for how to analyze the approximation activations of a CNN..\nTheorem 3.1. Assume that we have M K vectors wi,m of length l in which each entry is a scaled. i.i.d. (sub-)Gaussian random variable with mean zero and variance 1 (the scaling factor is 1/ Ml) Let t be the stride length (where n = (D - l)/t + 1) and build the structured random matrix W as the weight matrix in a single layer CNN for M-channel input dimension D. If.\nthen, with probability 1 - e, the M D Kn matrix WT satisfies the model-RIP for model M with parameter ok.\nWe also note that the same analysis can be applied to the sum of two model-k-sparse signals, with changes in the constants (that we do not track here)..\nOther examples of matrices that satisfy model-RIP (both empirically and via a less sophisticated analysis on the dot products between any two columns) include wavelets and localized Fourier bases. both examples that can be easily and efficiently implemented via convolutions in a CNN."}, {"section_index": "9", "section_name": "3.2 RECONSTRUCTION BOUNDS", "section_text": "To distinguish the true sparse code z and its reconstruction, we use = MI(h, k) = M(W x, k) foi the reconstruction by CNN. Our next result tells us that if we compute the hidden units h from an input signal x using a weight matrix W whose transpose has the model-RIP and using max pooling. over each filter (), then we can reconstruct (approximately) the input signal x simply by multiplying. the hidden units by W. This result bounds the relative error between the approximate reconstruction x and the input as a function of the distortion for the model-RIP. In our analysis, we assume that the input signal x = W'T' z is a sparse linear combination of hidden activations, captured approximately. by the filters in W. See Appendix[B for the detailed proofs. Part of our analysis also shows that the. hidden units are approximately the putative coefficient vector z in the sparse linear representation. for the input signal.\nTheorem 3.3.We assume that WT' satisfies the M?-RIP with constant 8k < 82k < 1. If we use W. 
in a single layer CNN both to compute the hidden units and to reconstruct the input x from these hidden units as x so that x = WTM(W x, k), the error in our reconstruction is."}, {"section_index": "10", "section_name": "EXPERIMENTAL EVIDENCE AND ANALYSIS", "section_text": "In this section, we provide experimental validation of our theoretical model and analysis. We. first validate experimentally the relevance of our assumption by examining the effectiveness of. random filter CNNs. We then provide an experimental validation of our theoretical analysis on the. synthetic 1D case, then we provide experimental results on more realistic scenarios. In particular we study popular deep neural networks trained for image classification on the ImageNet ILSVRC. 2012 dataset (Deng et al.2009). We calculate empirical model-RIP bounds for WT, showing. that they are consistent with theory. Our results are also consistent with a long line of research shows that it is reasonable to model real, natural images as sparse linear combinations over learned dictionaries (e.g.,Boureau et al.(2008); Le et al.(2013); Lee et al.](2008); Olshausen et al.(1996) Ranzato et al.(2007);Yang et al.(2010)). In addition, we verify our theoretical bounds for the reconstruction error x - WT 2/|x2 on real images. (This is the relative l2 distance between the original image and the reconstruction.) We investigate both randomly sampled filters and empirically. learned filters in these experiments. Our implementation is based on the Caffe (Jia et al.|2014) and. MatConvNet (Vedaldi and Lenc|2015) toolboxes.\nM l2 Ck log(K) + log(n) - log(e D\n502k. x - x||2\nRecall that the structured sparsity approximation algorithm MI includes the downsampling caused by. pooling and an unsampling operator. Theorem|3.3|is applicable to any type of upsampling switches,. so our reconstruction bound is generic to the particular design choice on how to recover the activation size in a decoding neural network."}, {"section_index": "11", "section_name": "4.1 EVALUATION OF GAUSSIAN RANDOM CNNS ON CIFAR-10", "section_text": "To show the practical relevance of our theoretical assumptions on using random filters for CNNs as stated in Section[2.1] we evaluate simple CNNs with Gaussian random filters (with i.i.d. zero mean unit-variance entries) on the CIFAR-10 dataset. The goal of this experiment is not to achieve state-of-the-art results, but to examine practical relevance of our assumption on random filter CNNs Once the CNNs weights are initialized (randomly), they are fixed during the training of the classifiers Specifically, we test random CNNs with 1, 2, and 3 convolutional layers, where we use ReLU as the ac- tivation. A 2 2 max pooling layer follows each convolutional layer to down-sample the feature map! We experiment with different filter sizes (3, 5, 7) and numbers of channels (64, 128, 256, 1024, 2048) and report the classification accuracy of the best-performing architectures based on cross-validation in Table[1 We also report the best performance using learnable filters for comparison. More details about the architectures can be found in Section[C.1|of the supplementary materials. We observe the CNNs with Gaussian random filters achieve surprisingly good classification performance (implying that they serve as reasonable representation of input data), although fully learnable CNN counterparts perform better. Our experimental results are also consistent with the observations made by|Jarrett et al (2009) and Saxe et al.(2011). 
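For concreteness, one stage of the fixed-random-filter networks evaluated here (convolution with i.i.d. Gaussian filters, ReLU, then 2 x 2 max pooling) can be written in a few lines; the sizes and seed are our own choices, and we omit the batch normalization used in the actual experiments (footnote 8):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, filt):
    """Valid cross-correlation (the usual CNN 'convolution') of x (C,H,W) with filt (K,C,f,f)."""
    K, C, f, _ = filt.shape
    H, Wd = x.shape[1] - f + 1, x.shape[2] - f + 1
    out = np.zeros((K, H, Wd))
    for i in range(H):
        for j in range(Wd):
            out[:, i, j] = np.tensordot(filt, x[:, i:i + f, j:j + f],
                                        axes=([1, 2, 3], [0, 1, 2]))
    return out

def maxpool2(x):
    """2x2 max pooling with stride 2 (trailing odd rows/cols dropped)."""
    K, H, Wd = x.shape
    x = x[:, :H - H % 2, :Wd - Wd % 2]
    return x.reshape(K, H // 2, 2, Wd // 2, 2).max(axis=(2, 4))

x = rng.normal(size=(3, 32, 32))          # a CIFAR-10-sized input
filt = rng.normal(size=(64, 3, 5, 5))     # fixed i.i.d. Gaussian filters, never trained
h = maxpool2(np.maximum(conv2d(x, filt), 0.0))
print(h.shape)                            # (64, 14, 14)
```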
Overall, these results seem to suggest that the CNNs with Gaussian random filters might be a reasonable setup which is amenable to mathematical analysis while not being too far off in terms of practical relevance.\nTable 1: Classification accuracy of CNNs with random and learnable filters on CIFAR-10. A typical laye. consists of four operators: convolution, ReLU, batch normalization and max pooling. Networks with optima filter size and numbers of output channels are used (see Section[C.1|in the supplementary materials for the. architecture details). The random filters, assumed in our theoretical analysis, perform reasonably well, not fai. off the learned filters.\nWe use 1-D synthetic data to empirically show the basic validity of our theory in terms of the model. RIP condition in Equation (1) and reconstruction bound in Theorem[3.3] We plot the histogram. of the empirical model-RIP values of 1D Gaussian random filters W ( scaled by 1/lM ) witl. size l 1 M K = 5 1 32 96 on 1D Mz sparse signal z with size D = 32 and sparsit k = 10. whose non-zero elements are drawn from a uniform distribution on [-1, 1]. The histogram in Figure 2a|and2b|are tightly centered around 1, suggesting that WT satisfies the model-RII condition in Equation (1) and its corollary from Lemma[B.1|in the supplementary materials. We also empirically show the reconstruction bound in Theorem|3.3|on synthetic vectors x = WT (Figure[2c). The reconstruction error is concentrated at around 0.1-0.2 and bound under 0.5. Result in Figure|2|suggests the practical validity of our theory when the model assumptions hold..\nWe conduct the rest of our experimental evaluations on the 16-layer VGGNet (Model D in Simonyan. and Zisserman (2015))where the computation is carried out on images; e.g., convolution with a. 2-D filter bank and pooling on square regions. In contrast to the theory, the realistic network does not. pool activations over all the possible shifts for each filter, but rather on non-overlapping patches. The. networks are trained for the large-scale ImageNet classification task, which is important for extending. to other supervised tasks in vision. The main findings on VGGNet are presented in the rest of this section; we also provide some analysis on AlexNet (Krizhevsky et al.] 2012) in the supplementary. materials.\n9vGGNet is practically important as it is popularly used in the community and is one of the best-performing. 'single-pathway\"' networks (i.e., no skip connections). We expect that the ResNet (e.g., trained from ImagNet. can also reconstruct images from its activations well in practice. However, the ResNet architectures are toc complicated to be in the scope of our theory without further nontrivial customization..\n8Implementation detail: We add a batch normalization layer together with a learnable scale and bias before the activation so that we do not need to tune the scale of the filters. The filter weights of the intermediate layers in the CNNs are not trained after random initialization. 
On top of the network, we use an optional average pooling layer to reduce the feature map size to 4 4 and a dropout layer for better regularization before feeding the feature to a learnable soft-max classifier for image classification.\n0.3 0.3 0.2 0.25 0.25 0.15 0.2 0.2 0.15 0.15 0.1 0.1 0.1 0.05 0.05 0.05 0 0 0.9 0.95 1 1.05 1.1 0.8 0.9 1 1.1 1.2 0 0.1 0.2 0.3 0.4 (a) (b) (c)\n0.3 0.3 0.2 0.25 0.25 0.15 0.2 0.2 0.15 0.15 0.1 0.1 0.1 0.05 0.05 0.05 0 0.9 0.95 1.05 1.1 0 0.8 0.9 1.1 1.2 0 1 0 0.1 0.2 0.3 0.4 (a) (b) (c)"}, {"section_index": "12", "section_name": "4.4 2D MODEL-RIP", "section_text": "0.3 0.12 0.05 0.25 0.1 0.04 0.2 0.08 0.03 0.15 0.06 0.02 0.1 0.04 0.05 0.02 0.01 0 0 0 0.98 0.99 1 1.01 1.02 1.03 0.85 0.9 0.95 1 1.05 0.9 0.95 1.05 1.1 (a) Random (b) After ReLU (c) Before ReLU\n0.3 0.12 0.05 0.25 0.1 0.04 0.2 0.08 0.03 0.15 0.06 0.02 0.1 0.04 0.05 0.02 0.01 0 0 0 0.98 0.99 1 1.01 1.02 1.03 0.85 0.9 0.95 1 1.05 0.9 0.95 1.05 1.1 (a) Random (b) After ReLU (c) Before ReLU\nFigure 3: For VGGNet's conv(5, 2) filters W, we plot the histogram of ratios |WT z||2/||z||2 (the model-RIP value derived from Equation (1); supposed to be concentrated at 1) where z is a Mx sparse signal. (a) z is randomly generated with the same sparsity as the conv(5, 2) activations and from a uniform distribution for the non-zero magnitude. (b) z is recovered by Algorithm2|from the conv(5,1) activations before applying ReLU (c) z is recovered by Algorithm|2|from the conv(5,1) activations after applying ReLU. The learned filters admits similar model-RIP value distributions to the random filters except for a bit larger bandwidth, which means the model-RIP condition in Equation (1) can empirically hold even when the filters do not necessarily subject to the i.i.d Gaussian random assumption.\nFigure 2: For 1D scaled Gaussian random filters W, we plot the histogram of ratios (a) ||wT z||2/||z||2 (model RIP condition in Equation (1); supposed to be concentrated at 1), (b) ||W WT z||2/||z||2 (model-RIP corollary from Lemma[B.1|in the supplementary materials; supposed to be concentrated at 1), and (c) x - x[2/|x|2 (reconstruction bound in Theorem[3.3 supposed to be small), where z is a Mk sparse signal that generates the vector x and x = WI'Mfxed(W x, k) is the reconstruction of x, where we use the naive unsampling to. recover the reduced dimension due to pooling (see Section[2.3).\nlayer c(1,1) c(1,2) p(1) c(2,1) c(2,2) p(2) c(3,1) c(3,2) c(3,3) p(3) % of non-zeros 49.1 69.7 80.8 67.4 49.7 70.7 53.4 51.9 28.7 45.9 layer c(4,1) c(4,2) c(4,3) p(4) c(5,1) c(5,2) c(5,3) p(5) % of non-zeros 35.6 29.6 12.6 23.1 23.9 20.6 7.3 13.1\nTable 2: Layer-wise sparsity of VGGNet on ILSVRC-2012 validation set. \"c\" stands for convolutional layers while \"p\" represents pooling layers. CNN with random filters in Section4.4|can be simulated with the same sparsity.\nVGGNet contains five groups of convolution and pooling layers, each group has 2~3 convolutional. layers followed by a pooling layer. We denote the j-th convolutional layer in the i-th group \"conv(i, j),. and the pooling layer \"pool(i).\" When we say the activations/features are from i-th layer, we mean they are the output of pool(i). Our analysis is for single convolutional layers. When evaluating the i-th layer, we take the activations from the (i - 1)-th layer, and investigate the filters and output of conv(i, 1).\nThe key to our reconstruction bound is Theorem|3.3|is the model-RIP condition for our particular model of sparsity in Equation (1). 
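Before turning to real filters, the synthetic 1-D check behind Figure 2(a) is easy to replicate; the following sketch (ours, reusing the l x 1 x M x K = 5 x 1 x 32 x 96 sizes quoted above, with D = 32 and k = 10) histograms the ratio ||W^T z||_2 / ||z||_2 of Equation (1) over random model-k-sparse z:

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, l, D, t, k = 96, 32, 5, 32, 1, 10    # sizes from the 1-D experiment in Section 4.2
n = (D - l) // t + 1

# Rows of W: the n stride-t shifts of each filter, concatenated over the M channels.
W = np.zeros((K * n, M * D))
f = rng.normal(size=(K, M, l))
for i in range(K):
    for j in range(n):
        blk = np.zeros((M, D))
        blk[:, j * t : j * t + l] = f[i]
        W[i * n + j] = blk.ravel()
W /= np.linalg.norm(W, axis=1, keepdims=True)          # unit-norm rows, as in Section 2.2

def model_sparse(k):
    """Draw z in M_k: at most one non-zero per length-n block, k non-zeros in total."""
    z = np.zeros(K * n)
    for b in rng.choice(K, size=k, replace=False):
        z[b * n + rng.integers(n)] = rng.uniform(-1.0, 1.0)
    return z

ratios = []
for _ in range(1000):
    z = model_sparse(k)
    ratios.append(np.linalg.norm(W.T @ z) / np.linalg.norm(z))
print(f"||W^T z|| / ||z||: mean {np.mean(ratios):.3f}, std {np.std(ratios):.3f}")
```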
We empirically evaluate the model-RIP property, i.e., || W'T z|l/[z for real CNN filters of the pretrained VGGNet. We use two-dimensional coefficients (or hidden units) z (each block of coefficients is of size D D), K filters of size l l, and pool the coefficients over smaller pooling regions (i.e., not over all possible shifts of each filter). The following experimental evidence suggest that the sparsity model and the model-RIP property of the filters are consistent with what we conclude from the mathematical analysis on the simpler one-dimensional case\nTo check the significance of the model-RIP property (i.e., how close ||wT'z||/l|z| is to 1) in. controlled settings, we first synthesize the hidden activations z with sparse uniform random variables. which fully agree with our model assumptions. The sparsity of z is constrained to the average level of the real CNN activations (refer to Table2). Given the filters of a certain convolutional layer, we use. the synthetic z (in equal position to this layer's output activations) to get statistics for the model-RIP. property. To be consistent with succeeding experiments, we choose conv(5, 2), while other layers.\nshow similar results. Figure[3|(a) summarizes the distribution of empirical model-RIP values, which. is clearly centered around 1 and satisfies Equation (1) with a short tail roughly bounded by dz < 1 For more details of the algorithm, we normalize the filters from the conv(5, 2) layer, which are l l (l = 3). All K = 512 filters with M = 512 input channels are used10|we set D = 15 (the same as the output activations of conv(5, 2)) and use 2 2 pooling regions*(commonly used in recent deep. networks). We generate 100o M randomly sampled sparse activation (z) maps by first sampling. their non-zero supports and then filling elements on the supports uniformly from [-1, 1]. The sparsity is the same as that in conv(5, 1) activations..\nTo gain more insight, we summarize the learned filter coherence in Table4|for all the convolutional. layers in VGGNet|12|This measures the correlation or similarity between the columns of WT and is a. proxy for the value of the model-RIP parameter d (which we can only estimate computationally) The smaller the coherence, the smaller dg is, and the better the reconstruction. The coherence of the learned filters is not low, which is inconsistent with our theoretical assumptions. However. the model-RIP property turns out to be robust to this mismatch. It also demonstrates the strong. invertibility of CNN in practice.."}, {"section_index": "13", "section_name": "4.5 RECONSTRUCTION BOUNDS", "section_text": "With model-RIP as a sufficient condition, Theorem|3.3|provides theoretical bounds for layer-wise reconstruction via x = WTM(W x, k). This operator consists of the projection and reconstruction in one IHT iteration. Without confusion, we refer to it as IHT for notational convenience. We investigate the practical reconstruction errors on Layer 1~4 activations (i.e., pool(1)~(4)) of VGGNet.\nTo encode and reconstruct intermediate activations of CNNs, we employ IHT with sparsity estimatec. from the real CNN activations on ILSVRC-2012 validation set (see Table2). We also reconstruct input images, since CNN inversion is not limited to a single layer, and images are easier to visualize. than hidden activations. To implement image reconstruction, we project the reconstructed activations. into the image space via a pretrained decoding network as in (Zhang et al.|2016), which extends. 
a similar autoencoder architecture as in (Dosovitskiy and Brox2016) to a stacked \"what-where autoencoder (Zhao et al.]2016). The reconstructed activations were scaled to have the same norm as. the original activations so that we can feed them into the decoding network..\n12The coherence is defined as the maximum (in absolute value) dot product between distinct pairs of column of the matrix WT, i.e. = maxi |W;WTI, where W, denote the i-th row of matrix W.\nAlgorithm 2 Sparse hidden activation recovery\n(a) original image (b) decoding net with original activation (c) IHT with learned filters (d) IHT with random filters (e) decoding net with random activation\nFigure 4: Visualization of images reconstructed by a pretrained decoding network with VGGNet's pool(4 activation reconstructed using different methods: (a) original image, (b) output of the 5-layer decoding networl with original activation, (c) output of the decoding net with reconstructed activation by IHT with learned filters (d) output of the decoding net with reconstructed activation by IHT with Gaussian random filters, (e) output o the decoding net with Gaussian random activation.\nIn Figure4|(c), we take the pretrained conv(5, 1) filters for IHT. The images recovered from the IH. reconstructed 4-th layer activations are reasonable and the reconstruction quality is significantly bette. than the random input baseline. We also try Gaussian random filters (Figure4|(d)), which agree more. with the model assumptions (e.g., lower coherence, see Table4). The learned filters from VGGNe. perform equally well visually. IHT ties the encoder and decoder weights (no filter learning for th decoder), so it does not perform as well as the decoding network trained with a huge batch of data. (Figure[4(b)). Nevertheless, we show both theoretically and experimentally decent reconstructior. bounds for these simple reconstruction methods on real CNNs. More visualization results for mor layers are in the supplementary materials (Figure|5|in Section|C.3).\nIn Table[3] we summarize reconstruction performance for all 4 layers. With random filters, the mode assumptions hold and the IHT reconstruction is the best quantitatively. IHT with real CNN filters performs comparable to the best case and much better than the baseline established by the randomly sampled activations.\nAdditionally, reconstruction performance of IHT is strongly related to the filter coherence, sum marized in Table4 Lower coherence agrees more closely with the model assumptions and leads. to higher reconstruction quality. Higher coherence yields worse recovery of the hidden activation. (i.e., large z, where is the hidden activations recovered by IHT, z is the true activation). Compared to Algorithm2l (one-step) IHT is not so robust to high coherence.."}, {"section_index": "14", "section_name": "5 CONCLUSION", "section_text": "We introduce three concepts that tie together a particular model of compressive sensing (and the associated recovery algorithms), the properties of learned filters, and the empirical observatior\n13The relative error in activation space of random activations (the last column) are identical (1.414) for al layers because ||f - fl/lf|l = 2 on average for Gaussian random f provided ||f| = ||f|l\nAs an example, Figure4|illustrates the image reconstruction results for the hidden activations of the 1-th layer, the ground truth of which is obtained by feeding natural images to the CNNs. 
Interestingly he decoding network itself is powerful, since it can reconstruct the glimpse of images with Gaussiar andom input, as shown in Figure 4|(e). Object shapes are recovered by using the pooling switche nly in the \"what-where\" autoencoder. This result suggests that it is important to determine whicl ooling units are active and then to estimate these values accurately. These steps are consistent with he steps in the inner loop of any iterative sparse signal reconstruction algorithm.\nIn summary, when the assumption of i.i.d Gaussian randomness of the CNN filters holds, our theoretical reconstruction bound strictly match with the empirical observations. More importantly we demonstrate that the bound can still reasonably hold in practice for discriminatively learned CNN layers, which is particularly true for layers with relatively lower coherence..\nTable 3: Layer-wise relative reconstruction errors by different methods in activation space and image space between reconstructed and original activations. For layer i. we take its activation after pooling from that laye and reconstruct it with different methods (using learned filters from the layer above or scaled Gaussian random filters) and feed the reconstructed activation to a pretrained corresponding decoding network!\nlayer (1,1) (1,2) (2,1) (2,2) (3,1) (3,2) (3,3) coherence of learned filters 0.9427 0.7340 0.6435 0.7465 0.5838 0.4844 0.5194 coherence of random filters 0.6701 0.1218 0.1546 0.1053 0.1099 0.0895 0.0802 layer (4,1) (4,2) (4,3) (5,1) (5,2) (5,3) coherence of learned filters 0.4596 0.4574 0.4043 0.4099 0.4099 0.4046 coherence of random filters 0.0920 0.0619 0.0617 0.0696 0.0674 0.0674\nTable 4: Comparison of coherence between learned filters in each convolutional layer of VGGNet and Gaussian random filters with corresponding sizes.\nthat CNNs are (approximately) invertible. Our experiments show that filters in trained CNNs are. consistent with the mathematical properties we present while the hidden units exhibit a much richer structure than mathematical analysis suggests. Perhaps simply moving towards a compressive, rathe. than exactly sparse, model for the hidden units will capture the sophisticated structure in these layers of a CNN or, perhaps, we need a more sophisticated model. Our experiments also demonstrate that there is considerable information captured in the switch units (or the identities of the non-zeros in the hidden units after pooling) that no mathematical model has yet expressed or explored thoroughly."}, {"section_index": "15", "section_name": "REFERENCES", "section_text": "S. Arora, A. Bhaskara, R. Ge, and T. Ma. Provable Bounds for Learning Some Deep Representations. ICML pages 584-592, 2014. S. Arora, Y. Liang, and T. Ma. Why are deep nets reversible: A simple theory, with implications for training arXiv:1511.05653, 2015. P. Baldi, P. Sadowski, and D. Whiteson. Searching for exotic particles in high-energy physics with deep learning Nature communications, 5, 2014. R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde. Model-Based Compressive Sensing. IEEE Transactions on Information Theory, 56(4):1982-2001, 2010. A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal of Imaging Science, 2:183-202, 2009. Y. Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 2013. T. Blumensath and M. E. Davies. 
Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265-274, 2009. Y.-1. Boureau, Y. L. Cun, et al. Sparse feature learning for deep belief networks. In NIPs, 2008. S. Boyd. l1-norm methods for convex-cardinality problems, ee364b: Convex optimization ii lecture notes, 2014-2015 spring. 2015. J. Bruna, A. Szlam, and Y. LeCun. Signal recovery from pooling representations. In ICML, pages 307-315, 2014. E. J. Candes. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9):589-592, 2008. D. Chicco, P. Sadowski, and P. Baldi. Deep autoencoder neural networks for gene ontology annotation predictions In Proceedings of the 5th ACM Conference Bioinformatics, Computational Biology, and Health Informatics, pages 533-540, 2014. K. Cho, B. Van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learn ing phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.\nimage space relative error activation space relative error layer learned random random learned random random filters filters activations filters filters activations 1 0.423 0.380 0.610 0.895 0.872 1.414 2 0.692 0.438 0.864 0.961 0.926 1.414 3 0.326 0.345 0.652 0.912 0.862 1.414 4 0.379 0.357 0.436 1.051 0.992 1.414\nR. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing. (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537, 2011. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database.. In CVPR, pages 248-255, June 2009. A. Dosovitskiy and T. Brox. Inverting visual representations with convolutional networks. CVPR, 2016.. R. Giryes, G. Sapiro, and A. M. Bronstein. Deep neural networks with random gaussian weights: A universal classification strategy? IEEE Transactions on Signal Processing, 64(13):3444-3457, 2016. A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. arXiv preprint arXiv:1412.5567, 2014. K. He, Y. Wang, and J. Hopcroft. A powerful generative model using random weights for the deep image. representation. arXiv preprint arXiv:1606.04801, 2016. G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N.. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012. K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun. What is the best multi-stage architecture for object. recognition? In ICCV, pages 2146-2153, 2009. Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014. A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097-1105, 2012. Q. V. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng. Building high-level. features using large scale unsupervised learning. In ICML, 2013. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541-551, 1989. H. Lee, C. Ekanadham, and A. 
Y. Ng. Sparse deep belief net model for visual area v2. In NIPs, 2008.. S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal. Processing, 41:3397 - 3415, 1993. T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases. and their compositionality. In Advances in neural information processing systems, pages 3111-3119, 2013. B. A. Olshausen et al. Emergence of simple-cell receptive field properties by learning a sparse code for natural. images. Nature, 381(6583):607-609, 1996. J. Y. Park, H. L. Yap, C. Rozell, and M. B. Wakin. Concentration of Measure for Block Diagonal Matrices With Applications to Compressive Signal Processing. IEEE Transactions on Signal Processing, 59(12):5859-5875, 2011. A. Paul and S. Venkatasubramanian. Why does Deep Learning work? - A perspective from Group Theory. arXiv.org, Dec. 2014. M. A. Ranzato, F. J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies. with applications to object recognition. In CVPR, 2007. A. Saxe, P. W. Koh, Z. Chen, M. Bhand, B. Suresh, and A. Y. Ng. On random weights and unsupervised feature. learning. In ICML, pages 1089-1096, 2011. J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 2015.. W. Shang, K. Sohn, D. Almeida, and H. Lee. Understanding and improving convolutional neural networks via. concatenated rectified linear units. In ICML, 2016. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR,. 2015. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015. A. Vedaldi and K. Lenc. Matconvnet - convolutional neural networks for matlab. In Proceeding of the ACM Int.. Conf. on Multimedia, 2015. R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv.org, Nov. 2010.. J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. Image Processing,. IEEE Transactions on, 19(11):2861-2873, 2010. Y. Zhang, K. Lee, and H. Lee. Augmenting neural networks with reconstructive decoding pathways for. large-scale image classification. In ICML, 2016. J. Zhao, M. Mathieu, R. Goroshin, and Y. Lecun. Stacked what-where auto-encoders. arXiv:1506.02351, 2016."}, {"section_index": "16", "section_name": "Supplementary Materials: Towards Understanding the Invertibility of Convolutional Neural Networks", "section_text": "MATHEMATICAL ANALYSIS: MODEL-RIP AND RANDOM FILTERS\nTheorem|3.1(Restated) Assume that we have M K vectors wi,m of length l in which each entry is. a scaled i.i.d. (sub-)Gaussian random variable with mean zero and variance 1 (the scaling factor is 1/ Ml). Let t be the stride length (where n = (D - l)/t + 1) and build the structured random matrix W as the weight matrix in a single layer CNN for M-channel input dimension D. If.\nMl2 Ck log(K) + log(n) - log(e) D\nthen, with probability 1 - e, the M D Kn matrix WT satisfies the model-RIP for model M with parameter ok.\nProof. 
We note that this result follows the same structure of that for many proofs of the RIP for (structured) random matrices (see Park et al.(2011);Vershynin (2010) for details) although we make minor tweaks to account for the particular structure of W'\nSuppose that z E M which means that z consists of at most k non-zero entries that each appear ir a distinct block of size n (there are a total of K blocks). First, we observe that the norm of WT z is preserved in expectation.\nE(||WTz|?)= l|zll]\nE - 0 if j1 j2 or m1 m2, and we normalized the random i,m1 ,m that E 1 for all j. Finally, we have. E(l|WTz|?) =E(zTwWTz) =zTE(WWT) z=zTz=|z||2\nE(exp(tZ)) < exp(t-C2)\nfor all t E R and some constant C. The sub-Gaussian norm of Z, denoted ||Z||/, is\n1 lZly2 = sup A p>1\nexp(1-t/C\n14There are two other equivalent properties. See|Vershynin(2010) for detail\nProof. Note that each entry of wT is either zero or Gaussian random variable w ~ N(0,1) (suitably normalized). Therefore, it is obvious that E(W wT) = I since each row of W satisfies\nLet y = WT z. We aim to show that the square norm of the random variable I|yll? concentrates tightly about its mean; i.e., with exceedingly low probability.\nlyll? - l|zll? >o||z|2\nTo do so, we need several properties of sub-Gaussian and sub-exponential random variables. A mean-zero sub-Gaussian random variable Z has a moment generating function that satisfies\nKn Yi j=1\nand observe that y; 1s a linear combination of 1.1.d. sub-Gaussian random variables (or it 1s identically equal to O) and, as such, is itself a sub-Gaussian random variable with mean zero and sub-Gaussian. norm ||yill2 C/Me||w||2||z|2 (seeVershynin (2010), Lemma 5.9). The structure of the random matrix and how many non-zero entries are in row i of W do enter the more refined bound on. the sub-Gaussian norm of ||yi||, (again, seeVershynin (2010), Lemma 5.9 for details) but we ignore uch details for this estimate. . or the next estimate\nM D Ct2 Ct P ai(Y-EY2 2 exp min T2|[a T||a VC i=1\nM D lyll2- l|z3]>o||z lyi|l,(y?-Ey?) P i=1\nM D CD|w|$|z|2 C|w||?||z|l2 Hall?-yilly2 and la|| M l2 Ml i=1\nMl28 k(log(K) + log(n) < exp(log(e) exr\nM l2 log(K) + log(n) - log( K) + log(n) - log(e D\nTherefore, if design our matrix W as described and with the parameter relationship as above, the matrix WT with satisfy the model-RIP for M and parameter with probability 1 e..\nLet us discuss the relationship amongst the parameters in our result. First, if we have only one channel. M = 1 and the filter length l = D, then our bound on the number of measurements D matches those of traditional (model-based) compressive sensing; namely\nD log(K) + log(n) - log(e))\nIf l < D (i.e., the filters are much shorter than the length of the input signal as in a CNN), then we. can compensate by adding more channels; i.e., the filter length l needs to be larger than D, or, if add more channels, / D/M.\nC82Ml2 C8Ml [yll2lIzlI2> o||zlI?) 2 exp C min D||w|, w"}, {"section_index": "17", "section_name": "MATHEMATICAL ANALYSIS: RECONSTRUCTION BOUNDS", "section_text": "The consequences of having model-RIP are two-fold. The first is that if we assume that an input image is the structured sparse linear combination of filters, x = WT z (where z E M and WT satisfies the model-RIP property), then we know an upper and lower bound on the norm of x in terms. of the norm of its sparse coefficients, ||x||2 (1 )||z||2. Additionally,\nTheorem3.3(Restated) We assume that WT satisfies the M?-RIP with constant dk d2k < 1. 
If we use W in a single layer CNN both to compute the hidden units and to reconstruct the input x from these hidden units as x so that x = WTM(Wx, k), the error in our reconstruction is.\nLemma B.1. Suppose WT has Mg-RIP with constant Sk. Let be a support corresponding to a subspace in Mk. Then we have the following bounds:.\nl|Wgx||2 V1+0k||x||2 l|WgWz||2 (1+ 0k)|Iz|I2 |WWz|2 (1- 0k)||z||2\nLemma B.2. Suppose that WT has M?-RIP with constant S2k. Let be a support correspondin to a subspace of Mg and suppose that z E Mg (not necessarily supported on 9). Then\nW2WTz|gc|l2 < d2k||z|nc||2\nProof. Let ho and h be the vector h restricted to the support sets and II, respectively. Since both are support sets for M and since is the best support set for h,\n[h - h22 < h - hn2\nand. after several calculations. we have\n1 Iz||2\nwe can see that the computation of h is nothing other than the first step of a reconstruction algorithm analogous to that of model-based compressed sensing. As a result, we have a bound on the error between h and z and we see that we can analyze the approximation properties of a feedfoward CNN and its linear reconstruction algorithm. In particular, we can conclude that a feedforward CNN and a linear reconstruction algorithm provide a good approximation to the original input image.\n502k xx2\nLet II denote the support of the M sparse vector z. Set h = W x and set to be the result of max pooling applied to the vector h, or the best fit (with respect to the l2 norm) to h in the model Mg Let denote the support set of E M. For simplicity, we assume [H] = k = [\nLemma B.3 (Identification). The support set, Q, of the switch units captures a significant fraction of the total energy in the coefficient vector z\n2d2k l|z|nc|2 z2 dk\n[h|a\\n|l? ||h|n\\a|l2\nhg\\n2 =W2\\nW1 z2 < 02kz]\nWe can bound the other side of the inequality as\nhn\\22 |Wn\\n(W zn\\2)||2Wn\\2(W z2)2 (1dk)z1\\2202kz22\nd2kz2 > (1 - dk)z2c2 - 02kz22\nTo set the value of on its support set , we simply set = h| and |c = 0. The\n5d2k l|z-2||2 l|z||2 dk\nProof. First, note that I - WoW?l < dk since\nWz (1dk) sup Omax )=0max(W2W)) <(1+0k) ||z|I Iz|F0\nwhere max is the maximum singular value. Therefore\nz 22 znc2+zn- 2n2 =||z|n||2+||z|n- Wn(WTz|n+WTz|ne)|I2 <l|z|n||2+ll(I-WWT)z|n|2 +|WaWTz|n||2 ||z|2c|2+ ||I- WW}l|2||z|2|2 + 02k|z|2c|2 znc2+ 0kz22+ 02kznc2 2d2k 502\nFinally, if we use the autoencoder formulation to reconstruct the original image x by setting x = WT , we can estimate the reconstruction error. We note that is M-sparse by construction anc remind the reader that WT satisfies M?-model-RIP with constants dk d2k < 1. Then, using LemmaB.4as well as the M?-sparse properties of wT\nThis proves that a feedforward CNN with a linear reconstruction algorithm is an approximate autoencoder and bounds the reconstruction error of the input image in terms of the geometric properties of the filters.\n282k l|z|nc|2 lz|2 dk\nx-x||2 =|WT(z-2)||2 <V1+ d2k|z- z||2 502 +02k||z2 502 O2k 1- O2k\nIn this section, we provide more details on the network architectures that we used in Table1 Ir particular, we describe the best performing architectures for all cases in Table[5]\n(1024)5c-2pmax-4pave 68.1%\nTable 5: Best-performing architecture and classification accuracy of random CNNs on CIFAR-10. \"([n])[k]c denotes a convolution layer with a stride 1, a kernel size [k] and [n] output channels, \"[k]pmax\" denotes a max pooling layer with a kernel size [k] and a stride [k], and \"[k]pave\" denotes a average pooling layer. 
A typical layer consists of four operations, namely convolution, ReLU, batch normalization, and max pooling.\nWe present coherence (see Table[6) and sparsity level (see Table[7) for each layer in AlexNet\nTable 6: Comparison of coherence between learned filters in each layer of AlexNet and Gaussian random filters with corresponding sizes.\nMethod #Layers 1 layers 2 layers 3 layers Best (2048)3c-2pmax- (2048)3c-2pmax-(2048)3c- (2048)5c-2pmax-4Pave Random filters param. (2048)3c-2pmax-2pave 2pmax-(1024)3c-2pmax Accuracy 66.5% 74.6% 74.8% Best (1024)3c-2pmax- (1024)3c-2pmax-(1024)3c- (1024)5c-2pmax-4pave Learned filters param. (1024)3c-2pmax-2Pave 2Pmax-(1024)3c-2Pmax Accuracy 68.1% 83.3% 89.3%\n(1024)3c-2pmax- (1024)3c-2pmax-2pave 83.3%\nLayer 1 (a) original image (b) with original activatione (c) IHT with learned filterse d IHT with random filters. (e) with random activatione Layer 2 (a) original image (b) (c) IHT with learned filterse (d) IHT with random filters (e) decoding nef with random activatione Layer 3 (a) original image (b) c) IHT with learned filterse d IHT with random filters. (e) Layer 4 (a) original image (b) decoding net with original activatione (c) IHT with leamed filterse d IHT with random filters. (e) decoding nef with random activatione\nFigure 5: Visualization of images reconstructed by a pretrained decoding network with VGGNet's pool(4 activation reconstructed using different methods: (a) original image, (b) output of the 5-layer decoding networl with original activation, (c) output of the decoding net with reconstructed activation by IHT with learned filters (d) output of the decoding net with reconstructed activation by IHT with Gaussian random filters, (e) output of the decoding net with Gaussian random activation."}] |
Hkg4TI9xl | [{"section_index": "0", "section_name": "A BASELINE FOR DETECTING MISCLASSIFIED AND OUT-OF-DISTRIBUTION EXAMPLES IN NEURAL NETWORKS", "section_text": "Dan Hendrycks\nUniversity of California, Berkeley hendrycks@berkeley.edu\nUniversity of California, Berkeley\nWe consider the two related problems of detecting if an example is misclassified o1 out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maxi- mum softmax probabilities than erroneously classified and out-of-distribution ex amples, allowing for their detection. We assess performance by defining sev eral tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future re search on these underexplored detection tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "When machine learning classifiers are employed in real-world tasks, they tend to fail when th training and test distributions differ. Worse, these classifiers often fail silently by providing high confidence predictions while being woefully incorrect (Goodfellow et al., 2015; Amodei et al. 2016). Classifiers failing to indicate when they are likely mistaken can limit their adoption o cause serious accidents. For example, a medical diagnosis model may consistently classify witl high confidence, even while it should flag difficult examples for human intervention. The resultin unflagged, erroneous diagnoses could blockade future machine learning technologies in medicine More generally and importantly, estimating when a model is in error is of great concern to AI Safet (Amodei et al., 2016).\nThese high-confidence predictions are frequently produced by softmaxes because softmax probabil. ities are computed with the fast-growing exponential function. Thus minor additions to the softmax. inputs, i.e. the logits, can lead to substantial changes in the output distribution. Since the soft. max function is a smooth approximation of an indicator function, it is uncommon to see a uniform. distribution outputted for out-of-distribution examples. Indeed, random Gaussian noise fed into ar MNIST image classifier gives a \"prediction confidence\" or predicted class probability of 91%, as we. show later. Throughout our experiments we establish that the prediction probability from a softmax. distribution has a poor direct correspondence to confidence. This is consistent with a great deal of. anecdotal evidence from researchers (Nguyen & O'Connor, 2015; Yu et al., 2010; Provost et al.. 1998; Nguyen et al., 2015).\nHowever, in this work we also show the prediction probability of incorrect and out-of-distributior examples tends to be lower than the prediction probability for correct examples. Therefore, cap turing prediction probability statistics about correct or in-sample examples is often sufficient for detecting whether an example is in error or abnormal, even though the prediction probability viewec in isolation can be misleading.\nThese prediction probabilities form our detection baseline, and we demonstrate its efficacy througl various computer vision, natural language processing, and automatic speech recognition tasks While these prediction probabilities create a consistently useful baseline, at times they are less ef fective, revealing room for improvement. 
To give ideas for future detection research, we contribute\nWork done while the author was at TTIC. Code is available at github.com/hendrycks/error-detection\nKevin Gimpel\nToyota Technological Institute at Chicago kqimpel@ttic.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "one method which outperforms the baseline on some (but not all) tasks. This new method evaluate the quality of a neural network's input reconstruction to determine if an example is abnormal..\nIn addition to the baseline methods, another contribution of this work is the designation of standarc. tasks and evaluation metrics for assessing the automatic detection of errors and out-of-distributior examples. We use a large number of well-studied tasks across three research areas, using standarc. neural network architectures that perform well on them. For out-of-distribution detection, we pro. vide ways to supply the out-of-distribution examples at test time like using images from differen. datasets and realistically distorting inputs. We hope that other researchers will pursue these tasks ir. future work and surpass the performance of our baselines..\nIn summary, while softmax classifier probabilities are not directly useful as confidence estimates estimating model confidence is not as bleak as previously believed. Simple statistics derived from softmax distributions provide a surprisingly effective way to determine whether an example is mis- classified or from a different distribution from the training data, as demonstrated by our experimental results spanning computer vision, natural language processing, and speech recognition tasks. This creates a strong baseline for detecting errors and out-of-distribution examples which we hope future research surpasses.\nIn this paper, we are interested in two related problems. The first is error and success prediction: can we predict whether a trained classifier will make an error on a particular held-out test example can we predict if it will correctly classify said example? The second is in- and out-of-distribution detection: can we predict whether a test example is from a different distribution from the training data; can we predict if it is from within the same distribution? Below we present a simple baseline for solving these two problems. To evaluate our solution, we use two evaluation metrics.\nBefore mentioning the two evaluation metrics, we first note that comparing detectors is not as straightforward as using accuracy. For detection we have two classes, and the detector outputs a score for both the positive and negative class. If the negative class is far more likely than the positive class, a model may always guess the negative class and obtain high accuracy, which can be mislead- ing (Provost et al., 1998). We must then specify a score threshold so that some positive examples are classified correctly, but this depends upon the trade-off between false negatives (fn) and false positives (fp).\nFaced with this issue, we employ the Area Under the Receiver Operating Characteristic curve (AU- ROC) metric, which is a threshold-independent performance evaluation (Davis & Goadrich, 2006). The ROC curve is a graph showing the true positive rate (tpr = tp/(tp + fn)) and the false positive. rate (fpr = fp/(fp + tn)) against each other. Moreover, the AUROC can be interpreted as the prob. ability that a positive example has a greater detector score/value than a negative example (Fawcett,. 2005). 
Consequently, a random positive example detector corresponds to a 50% AUROC, and a. 'perfect\"' classifier corresponds to 100%.2\nThe AUROC sidesteps the issue of threshold selection, as does the Area Under the Precision-Recall curve (AUPR) which is sometimes deemed more informative (Manning & Schutze, 1999). This is because the AUROC is not ideal when the positive class and negative class have greatly differing base rates, and the AUPR adjusts for these different positive and negative base rates. For this reason, the AUPR is our second evaluation metric. The PR curve plots the precision (tp/(tp +fp)) and recall (tp/(tp + fn)) against each other. The baseline detector has an AUPR approximately equal to the precision (Saito & Rehmsmeier, 2015), and a \"perfect\"' classifier has an AUPR of 100%. Conse- quently, the base rate of the positive class greatly influences the AUPR, so for detection we must specify which class is positive. In view of this, we show the AUPRs when we treat success/normal classes as positive, and then we show the areas when we treat the error/abnormal classes as positive. We can treat the error/abnormal classes as positive by multiplying the scores by -1 and labeling them positive. Note that treating error/abnormal classes as positive classes does not change the AU-\n1we consider adversarial example detection techniques in a separate work (Hendrycks & Gimpel, 2016a) 2A debatable, imprecise interpretation of AUROC values may be as follows: 90%-100%: Excellent. 80%90%: Go0d, 70%80%: Fair, 60%-70%: Po0r, 50%-60%: Fail.\nROC since if S is a score for a successfully classified value, and E is the score for an erroneously classified value. AUROC = P(S > E) = P(-E > -S)"}, {"section_index": "3", "section_name": "SOFTMAX PREDICTION PROBABILITY AS A BASELINE", "section_text": "In what follows we retrieve the maximum/predicted class probability from a softmax distributior and thereby detect whether an example is erroneously classified or out-of-distribution. Specifically we separate correctly and incorrectly classified test set examples and, for each example, compute the softmax probability of the predicted class, i.e., the maximum softmax probability.' From these two groups we obtain the area under PR and ROC curves. These areas summarize the performance of a binary classifier discriminating with values/scores (in this case, maximum probabilities fron the softmaxes) across different thresholds. This description treats correctly classified examples a the positive class, denoted \"Success\" or \"Succ\"' in our tables. In \"Error\"' or \"Err' we treat the the incorrectly classified examples as the positive class; to do this we label incorrectly classifie examples as positive and take the negatives of the softmax probabilities of the predicted classes as the scores.\nTable labels aside, we begin experimentation with datasets from vision then consider tasks in natural language processing and automatic speech recognition. In all of the following experiments, the AU- ROCs differ from the random baselines with high statistical significance according to the Wilcoxon. rank-sum test."}, {"section_index": "4", "section_name": "3.1 COMPUTER VISION", "section_text": "In the following computer vision tasks, we use three datasets: MNIST, CIFAR-10, and CIFAR 100 (Krizhevsky, 2009). MNIST is a dataset of handwritten digits, consisting of 60000 training and 10000 testing examples. 
Meanwhile, CIFAR-10 has colored images belonging to 10 different classes, with 50000 training and 10000 testing examples. CIFAR-100 is more difficult, as it has 100 different classes with 50000 training and 10000 testing examples.\nIn Table 1. we see that correctly classified and incorrectly classified examples are sufficiently distinct and thus allow reliable discrimination. Note that the area under the curves degrade with image recognizer test error.\nNext, let us consider using softmax distributions to determine whether an example is in- or out of-distribution. We use all test set examples as the in-distribution (positive) examples. For out-of distribution (negative) examples, we use realistic images and noise. For CIFAR-10 and CIFAR-100 we use realistic images from the Scene UNderstanding dataset (SUN), which consists of 397 differ ent scenes (Xiao et al., 2010). For MNIST, we use grayscale realistic images from three sources Omniglot (Lake et al., 2015) images are handwritten characters rather than the handwritten digits ir MNIST. Next, notMNIST (Bulatov, 2011) consists of typeface characters. Last of the realistic im ages, CIFAR-1Obw are black and white rescaled CIFAR-1O images. The synthetic \"Gaussian\" dat:\n3We also tried using the KL divergence of the softmax distribution from the uniform distribution for detec. tion. With divergence values, detector AUROCs and AUPRs were highly correlated with AUROCs and AUPR from a detector using the maximum softmax probability. This divergence is similar to entropy (Steinhardt 8 Liang, 2016; Williams & Renals, 1997).\nWe begin our experiments in Section 3 where we describe a simple baseline which uses the maxi- mum probability from the softmax label distribution in neural network classifiers. Then in Section 4. we describe a method that uses an additional, auxiliary model component trained to reconstruct the input.\nFor \"In, we treat the in-distribution, correctly classified test set examples as positive and use the. softmax probability for the predicted class as a score, while for \"Out'' we treat the out-of-distributior examples as positive and use the negative of the aforementioned probability. Since the AUPRs fo. Success, Error, In, Out classifiers depend on the rate of positive examples, we list what area a randon. detector would achieve with \"Base\" values. Also in the upcoming results we list the mean predictec. class probability of wrongly classified examples (Pred Prob Wrong (mean)) to demonstrate that the. softmax prediction probability is a misleading confidence proxy when viewed in isolation. The. Pred. Prob (mean)\" columns show this same shortcoming but for out-of-distribution examples..\nIn-Distribution 7 AUROC AUPR In AUPR Pred. Prob Out-of-Distribution /Base /Base Out/Base (mean) CIFAR-10/SUN 95/50 89/33 97/67 72 CIFAR-10/Gaussian 97/50 98/49 95/51 77 CIFAR-10/All 96/50 88/24 98/76 74 CIFAR-100/SUN 91/50 83/27 96/73 56 CIFAR-100/Gaussian 88/50 92/43 80/57 77 CIFAR-100/All 90/50 81/21 96/79 63 MNIST/Omniglot 96/50 97/52 96/48 86 MNIST/notMNIST 85/50 86/50 88/50 92 MNIST/CIFAR-10bw 95/50 95/50 95/50 87 MNIST/Gaussian 90/50 90/50 91/50 91 MNIST/Uniform 99/50 99/50 98/50 83 MNIST/AIl 91/50 76/20 98/80 89\nTable 2: Distinguishing in- and out-of-distribution test set data for image classification. CIFAR 10/All is the same as CIFAR-10/(SUN, Gaussian). All values are percentages.\nis random normal noise, and \"Uniform' data is random uniform noise. Images are resized when necessary.\nThe results are shown in Table 2. 
Notice that the mean predicted/maximum class probabilities (Pred Prob (mean)) are above 75%, but if the prediction probability alone is translated to confidence, th softmax distribution should be more uniform for CIFAR-100. This again shows softmax probabil ities should not be viewed as a direct representation of confidence. Fortunately, out-of-distributior examples sufficiently differ in the prediction probabilities from in-distribution examples, allowing for successful detection and generally high area under PR and ROC curves.\nFor reproducibility, let us specify the model architectures. The MNIST classifier is a three-layer. 256 neuron-wide, fully-connected network trained for 30 epochs with Adam (Kingma & Ba, 2015). It uses a GELU nonlinearity (Hendrycks & Gimpel, 2016b), x(x), where (x) is the CDF of the. standard normal distribution. We initialize our weights according to (Hendrycks & Gimpel, 2016c). as it is suited for arbitrary nonlinearities. For CIFAR-10 and CIFAR-100, we train a 40-4 wide residual network (Zagoruyko & Komodakis, 2016) for 50 epochs with stochastic gradient descent using restarts (Loshchilov & Hutter, 2016), the GELU nonlinearity, and standard mirroring and cropping data augmentation."}, {"section_index": "5", "section_name": "3.2 NATURAL LANGUAGE PROCESSING", "section_text": "Let us turn to a variety of tasks and architectures used in natural language processing"}, {"section_index": "6", "section_name": "3.2.1 SENTIMENT CLASSIFICATION", "section_text": "The first NLP task is binary sentiment classification using the IMDB dataset (Maas et al., 2011), a dataset of polarized movie reviews with 25o00 training and 25000 test reviews. This task allows us to determine if classifiers trained on a relatively small dataset still produce informative softmax\nDataset AUROC AUPR AUPR Pred. Prob Test Set /Base Succ/Base Err/Base Wrong(mean) Error MNIST 97/50 100/98 48/1.7 86 1.69 CIFAR-10 93/50 100/95 43/5 80 4.96 CIFAR-100 87/50 96/79 62/21 66 20.7\nTable 1: The softmax predicted class probability allows for discrimination between correctly and incorrectly classified test set examples. \"Pred. Prob Wrong(mean)' is the mean softmax probability for wrongly classified examples, showcasing its shortcoming as a direct measure of confidence Succ/Err Base values are the AUROCs or AUPRs achieved by random classifiers. All entries are percentages.\nTable 3: Detecting correct and incorrect classifications for binary sentiment classification\nIn-Distribution /. AUROC AUPR In AUPR Pred. Prob Out-of-Distribution /Base /Base Out/Base (mean) IMDB/Customer Reviews 95/50 99/89 60/11 62 IMDB/Movie Reviews 94/50 98/72 80/28 63 IMDB/All 94/50 97/66 84/34 63\nTable 4: Distinguishing in- and out-of-distribution test set data for binary sentiment classification IMDB/All is the same as IMDB/(Customer Reviews, Movie Reviews). All values are percentages\ndistributions. For this task we use a linear classifier taking as input the average of trainable, randoml initialized word vectors with dimension 50 (Joulin et al., 2016: Iyyer et al., 2015). We train for 1: epochs with Adam and early stopping based upon 5ooo held-out training reviews. Again, Table 3 shows that the softmax distributions differ between correctly and incorrectly classified examples, sc prediction probabilities allow us to detect reliably which examples are right and wrong.\nNow we use the Customer Review (Hu & Liu, 2004) and Movie Review (Pang et al., 2002) datasets as out-of-distribution examples. 
The Customer Review dataset has reviews of products rather than only movies, and the Movie Review dataset has snippets from professional movie reviewers rather than full-length amateur reviews. We leave all test set examples from IMDB as in-distribution examples, and out-of-distribution examples are the 500 or 1000 test reviews from Customer Review and Movie Review datasets, respectively. Table 4 displays detection results, showing a similar story to Table 2."}, {"section_index": "7", "section_name": "3.2.2 TEXT CATEGORIZATION", "section_text": "We turn to text categorization tasks to determine whether softmax distributions are useful for de ecting similar but out-of-distribution examples. In the following text categorization tasks, we traii lassifiers to predict the subject of the text they are processing. In the 20 Newsgroups dataset (Lang 995), there are 20 different newsgroup subjects with a total of 20o00 documents for the whol lataset. The Reuters 8 (Lewis et al., 2004) dataset has eight different news subjects with nearl 3000 stories in total. The Reuters 52 dataset has 52 news subjects with slightly over 9000 new tories; this dataset can have as few as three stories for a single subject.\nDataset AUROC AUPR AUPR Pred.Prob Test Set /Base Succ/Base Err/Base Wrong(mean) Error 15 Newsgroups 89/50 99/93 42/7.3 53 7.31 Reuters 6 89/50 100/98 35/2.5 77 2.53 Reuters 40 91/50 99/92 45/7.6 62 7.55\nTable 5: Detecting correct and incorrect classifications for text categorization\nFor the 20 Newsgroups dataset we train a linear classifier on 30-dimensional word vectors for 20 epochs. Meanwhile, Reuters 8 and Retuers 52 use one-layer neural networks with a bag-of-words input and a GELU nonlinearity, all optimized with Adam for 5 epochs. We train on a subset of subjects, leaving out 5 newsgroup subjects from 20 Newsgroups, 2 news subjects from Reuters 8, and 12 news subjects from Reuters 52, leaving the rest as out-of-distribution examples. Table 5 shows that with these datasets and architectures, we can detect errors dependably, and Table 6 informs us that the softmax prediction probabilities allow for detecting out-of-distribution subjects.\nIn-Distribution /. AUROC AUPR AUPR Pred. Prob Out-of-Distribution /Base In/Base Out/Base (mean) 15/5 Newsgroups 75/50 92/84 45/16 65 Reuters6/Reuters2 92/50 100/95 56/4.5 72 95/50 Reuters40/Reuters12 100/93 60/7.2 47\nTable 6: Distinguishing in- and out-of-distribution test set data for text categorizatior\nTable 7: Detecting correct and incorrect classifications for part-of-speech tagging"}, {"section_index": "8", "section_name": "3.2.3 PART-OF-SPEECH TAGGING", "section_text": "Part-of-speech (POS) tagging of newswire and social media text is our next challenge. We use the. Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993) which contains 45 distinct. POS tags. For social media, we use POS-annotated tweets (Gimpel et al., 2011; Owoputi et al.. 2013) which contain 25 tags. For the WSJ tagger, we train a bidirectional long short-term memory. recurrent neural network (Hochreiter & Schmidhuber, 1997) with three layers, 128 neurons pe. layer, with randomly initialized word vectors, and this is trained on 90% of the corpus for 10 epochs with stochastic gradient descent with a batch size of 32. The tweet tagger is simpler, as it is two layer neural network with a GELU nonlinearity, a weight initialization according to (Hendrycks & Gimpel, 2016c), pretrained word vectors trained on a corpus of 56 million tweets (Owoputi et al.. 
2013), and a hidden layer size of 256, all while training on 1000 tweets for 30 epochs with Adar. and early stopping with 327 validation tweets. Error detection results are in Table 7. For out-of- distribution detection, we use the WsJ tagger on the tweets as well as weblog data from the English. Web Treebank (Bies et al., 2012). The results are shown in Table 8. Since the weblog data is closer in style to newswire than are the tweets, it is harder to detect whether a weblog sentence is out of-distribution than a tweet. Indeed, since POS tagging is done at the word-level, we are detecting. whether each word is out-of-distribution given the word and contextual features. With this in mind we see that it is easier to detect words as out-of-distribution if they are from tweets than from blogs.\nTable 8: Detecting out-of-distribution tweets and blog articles for part-of-speech tagging. All values are percentages. *These examples are atypically close to the training distribution.."}, {"section_index": "9", "section_name": "3.3 AUTOMATIC SPEECH RECOGNITION", "section_text": "Now we consider a task which uses softmax values to construct entire sequences rather than deter mine an input's class. Our sequence prediction system uses a bidirectional LSTM with two-layer and a clipped GELU nonlinearity, optimized for 60 epochs with RMSProp trained on 80% of th TIMIT corpus (Garofolo et al., 1993). The LSTM is trained with connectionist temporal classifica tion (CTC) (Graves et al., 2006) for predicting sequences of phones given MFCCs, energy, and firs and second deltas of a 25ms frame. When trained with CTC, the LSTM learns to have its phon label probabilities spike momentarily while mostly predicting blank symbols otherwise. In this way the softmax is used differently from typical classification problems, providing a unique test for ou detection methods.\nWe do not show how the system performs on correctness/incorrectness detection because errors are not binary and instead lie along a range of edit distances. However, we can perform out-of\nDataset AUROC AUPR AUPR Pred. Prob Test Set /Base Succ/Base Err/Base Wrong(mean) Error WSJ 96/50 100/96 51/3.7 71 3.68 Twitter 89/50 98/87 53/13 69 12.59\nIn-Distribution /. AUROC AUPR AUPR Pred. Prob Out-of-Distribution /Base In/Base Out/Base (mean) WSJ/Twitter 80/50 98/92 41/7.7 81 WSJ/Weblog* 61/50 88/86 30/14 93\nIn-Distribution / AUROC AUPR AUPR Pred. Prob Out-of-Distribution /Base In/Base Out/Base (mean) TIMIT/TIMIT+Airport 99/50 99/50 99/50 59 TIMIT/TIMIT+Babble 100/50 100/50 100/50 55 TIMIT/TIMIT+Car 98/50 98/50 98/50 59 TIMIT/TIMIT+Exhibition 100/50 100/50 100/50 57 TIMIT/TIMIT+Restaurant 98/50 98/50 98/50 60 TIMIT/TIMIT+Street 100/50 100/50 100/50 52 TIMIT/TIMIT+Subway 100/50 100/50 100/50 56 TIMIT/TIMIT+TrainS 100/50 100/50 100/50 58 TIMIT/Chinese 85/50 80/34 90/66 64 TIMIT/AII 97/50 79/10 100/90 58\nTable 9: Detecting out-of-distribution distorted speech. All values are percentages\ndistribution detection. Mixing the TIMIT audio with realistic noises from the Aurora-2 datase. (Hirsch & Pearce. 2000). we keep the TIMIT audio volume at 100% and noise volume at 30% giving a mean SNR of approximately 5. Speakers are still clearly audible to the human ear but. confuse the phone recognizer because the prediction edit distance more than doubles. For more out. of-distribution examples, we use the test examples from the THCHS-30 dataset (Wang & Zhang. 2015), a Chinese speech corpus. Table 9 shows the results. Crucially, when performing detection. 
we compute the softmax probabilities while ignoring the blank symbol's logit. With the blank. symbol's presence, the softmax distributions at most time steps predict a blank symbol with high. confidence, but without the blank symbol we can better differentiate between normal and abnorma distributions. With this modification, the softmax prediction probabilities allow us to detect whethe. an example is out-of-distribution."}, {"section_index": "10", "section_name": "4 ABNORMALITY DETECTION WITH AUXILIARY DECODERS", "section_text": "Having seen that softmax prediction probabilities enable abnormality detection, we now show there is other information sometimes more useful for detection. To demonstrate this, we exploit the learned internal representations of neural networks. We start by training a normal classifier and append an auxiliary decoder which reconstructs the input, shown in Figure 1. Auxiliary decoders are sometimes known to increase classification performance (Zhang et al., 2016). The decoder and scorer are trained jointly on in-distribution examples. Thereafter, the blue layers in Figure 1 are frozen. Then we train red layers on clean and noised training examples, and the sigmoid output of the red layers scores how normal the input is. Consequently, noised examples are in the abnormal class, clean examples are of the normal class, and the sigmoid is trained to output to which class an input belongs. After training we consequently have a normal classifier, an auxiliary decoder, and what we call an abnormality module. The gains from the abnormality module demonstrate there are possible research avenues for outperforming the baseline."}, {"section_index": "11", "section_name": "4.1 TIMIT", "section_text": "We test the abnormality module by revisiting the TIMIT task with a different architecture and show. how these auxiliary components can greatly improve detection. The system is a three-layer, 1024-. neuron wide classifier with an auxiliary decoder and abnormality module. This network takes as input 11 frames and must predict the phone of the center frame, 26 features per frame. Weights are. initialized according to (Hendrycks & Gimpel, 2016c). This network trains for 20 epochs, and the. abnormality module trains for two. The abnormality module sees clean examples and, as negative. examples, TIMIT examples distorted with either white noise, brown noise (noise with its spectral. density proportional to 1/f2), or pink noise (noise with its spectral density proportional to 1/f) at. various volumes.\nWe note that the abnormality module is not trained on the same type of noise added to the test examples. Nonetheless, Table 10 shows that simple noised examples translate to effective detection of realistically distorted audio. We detect abnormal examples by comparing the typical abnormality\nTable 10: Abnormality modules can generalize to novel distortions and detect out-of-distribution examples even when they do not severely degrade accuracy. All values are percentages.\nIn-Distribution /. AUROC AUROC AUPR AUPR AUPR AUPR Out-of-Distribution. /Base /Base In/Base In/Base Out/Base Out/Base Softmax AbMod Softmax AbMod Softmax AbMod MNIST/Omniglot 95/50 100/50 95/52 100/52 95/48 100/48 MNIST/notMNIST 87/50 100/50 88/50 100/50 90/50 100/50 MNIST/CIFAR-10bw 98/50 100/50 98/50 100/50 98/50 100/50 MNIST/Gaussian 88/50 100/50 88/50 100/50 90/50 100/50 MNIST/Uniform 99/50 100/50 99/50 100/50 99/50 100/50 Average 93 100 94 100 94 100\nTable 11: Improved detection using the abnormality module. 
All values are percentages\nmodule outputs for clean examples with the outputs for the distorted examples. The noises are from Aurora-2 and are added to TIMIT examples with 30% volume. We also use the THCHS-30 dataset for Chinese speech. Unlike before, we use the THCHS-30 training examples rather than test set. examples because fully connected networks can evaluate the whole training set sufficiently quickly It is worth mentioning that fully connected deep neural networks are noise robust (Seltzer et al.,. 2013), yet the abnormality module can still detect whether an example is out-of-distribution. To see why this is remarkable, note that the network's frame classification error is 29.69% on the entire test (not core) dataset, and the average classification error for distorted examples is 30.43%-this is unlike the bidirectional LSTM which had a more pronounced performance decline. Because the classification degradation was only slight, the softmax statistics alone did not provide useful out-. of-distribution detection. In contrast, the abnormality module provided scores which allowed the detection of different-but-similar examples. In practice, it may be important to determine whether an example is out-of-distribution even if it does not greatly confuse the network, and the abnormality module facilitates this.\nFinally, much like in a previous experiment, we train an MNIST classifier with three layers of width 256. This time, we also use an auxiliary decoder and abnormality module rather than relying on only softmax statistics. For abnormal examples we blur, rotate, or add Gaussian noise to training images. Gains from the abnormality module are shown in Table 11, and there is a consistent out-of-sample detection improvement compared to softmax prediction probabilities. Even for highly dissimilar examples the abnormality module can further improve detection..\nIn-Distribution 7 AUROC AUROC AUPR AUPR AUPR AUPR Out-of-Distribution /Base /Base In/Base In/Base Out/Base Out/Base Softmax AbMod Softmax AbMod Softmax AbMod TIMIT/+Airport 75/50 100/50 77/41 100/41 73/59 100/59 TIMIT/+Babble 94/50 100/50 95/41 100/41 91/59 100/59 TIMIT/+Car 70/50 98/50 69/41 98/41 70/59 98/59 TIMIT/+Exhib. 91/50 98/50 92/41 98/41 91/59 98/59 TIMIT/+Rest. 68/50 95/50 70/41 96/41 67/59 95/59 TIMIT/+Subway 76/50 96/50 77/41 96/41 74/59 96/59 TIMIT/+Street 89/50 98/50 91/41 99/41 85/59 98/59 TIMIT/+Train 80/50 100/50 82/41 100/41 77/59 100/59 TIMIT/Chinese 79/50 90/50 41/12 66/12 96/88 98/88 Average 80 97 77 95 80 98"}, {"section_index": "12", "section_name": "DISCUSSION AND FUTURE EWORK", "section_text": "The abnormality module demonstrates that in some cases the baseline can be beaten by exploiting the representations of a network, suggesting myriad research directions. Some promising future avenues may utilize the intra-class variance: if the distance from an example to another of the same predicted class is abnormally high, it may be out-of-distribution (Giryes et al., 2015). Another path is to feed in a vector summarizing a layer's activations into an RNN, one vector for each layer The RNN may determine that the activation patterns are abnormal for out-of-distribution examples Others could make the detections fine-grained: is the out-of-distribution example a known-unknown or an unknown-unknown? A different avenue is not just to detect correct classifications but tc output the probability of a correct detection. 
We hope that any new detection methods are tested on a variety of tasks and architectures of the researcher's choice. A basic demonstration could include the following datasets: MNIST, CIFAR, IMDB, and tweets, because vision-only demonstrations may not transfer well to other architectures and datasets. Reporting the AUPR and AUROC values is important, and so is the underlying classifier's accuracy, since an always-wrong classifier gets a maximum AUPR for error detection if error is the positive class. Also, future research need not use the exact values from this paper for comparisons. Machine learning systems evolve, so tethering the evaluations to the exact architectures and datasets in this paper is needless. Instead, one could simply choose a variety of datasets and architectures possibly like those above and compare their detection method with a detector based on the softmax prediction probabilities from their classifiers. These are our basic recommendations for others who try to surpass the baseline on this underexplored challenge.

We demonstrated a softmax prediction probability baseline for error and out-of-distribution detection across several architectures and numerous datasets. We then presented the abnormality module, which provided superior scores for discriminating between normal and abnormal examples on tested cases. The abnormality module demonstrates that the baseline can be beaten in some cases, and this implies there is room for future research. Our hope is that other researchers investigate architectures which make predictions in view of abnormality estimates, and that others pursue more reliable methods for detecting errors and out-of-distribution inputs, because knowing when a machine learning system fails strikes us as highly important.

We would like to thank John Wieting, Hao Tang, Karen Livescu, Greg Shakhnarovich, and our reviewers for their suggestions. We would also like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI safety. arXiv, 2016.

Ann Bies, Justin Mott, Colin Warner, and Seth Kulick. English Web Treebank. 2012.

Yaroslav Bulatov. notMNIST dataset. 2011.

Jesse Davis and Mark Goadrich. The relationship between precision-recall and ROC curves. In International Conference on Machine Learning (ICML), 2006.

Tom Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 2005.

Raja Giryes, Guillermo Sapiro, and Alex M. Bronstein. Deep neural networks with random Gaussian weights: A universal classification strategy? arXiv, 2015.

Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: Labeling unsegmented sequence data with recurrent neural networks. In International Conference on Machine Learning (ICML), 2006.

Dan Hendrycks and Kevin Gimpel. Methods for detecting adversarial images and a colorful saliency map. arXiv, 2016a.

Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. arXiv, 2016b.

Dan Hendrycks and Kevin Gimpel. Adjusting for dropout variance in batch normalization and weight initialization. arXiv, 2016c.

Hans-Günter Hirsch and David Pearce. The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions. ISCA ITRW ASR2000, 2000.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

Minqing Hu and Bing Liu. Mining and summarizing customer reviews. Knowledge Discovery and Data Mining (KDD), 2004.

Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov.
Bag of tricks for efficient text classification. arXiv, 2016.

Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.

Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015.

David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research (JMLR), 2004.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with restarts. arXiv, 2016.

Khanh Nguyen and Brendan O'Connor. Posterior calibration and exploratory analysis for natural language processing models. In Empirical Methods in Natural Language Processing (EMNLP), 2015.

Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. Improved part-of-speech tagging for online conversational text with word clusters. In North American Chapter of the Association for Computational Linguistics (NAACL), 2013.

Foster Provost, Tom Fawcett, and Ron Kohavi. The case against accuracy estimation for comparing induction algorithms. In International Conference on Machine Learning (ICML), 1998.

Takaya Saito and Marc Rehmsmeier. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. In PLoS ONE, 2015.

Jacob Steinhardt and Percy Liang. Unsupervised risk estimation using only conditional independence structure. In Neural Information Processing Systems (NIPS), 2016.

Dong Wang and Xuewei Zhang. THCHS-30: A free Chinese speech corpus. Technical Report, 2015.

Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.

Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting supervised neural networks with unsupervised objectives for large-scale image classification. In International Conference on Machine Learning (ICML), 2016.

Figure 1: A neural network classifying a diamond image with an auxiliary decoder and an abnormality module. Circles are neurons, each having a GELU or sigmoid activation. The blurred diamond reconstruction precedes subtraction and elementwise squaring. The probability vector is the softmax probability vector. Blue layers train on in-distribution data, and red layers train on both in- and out-of-distribution examples."}]
Hk6a8N5xe | [{"section_index": "0", "section_name": "CLASSIFY OR SELECT: NEURAL ARCHITECTURES FOR EXTRACTIVE DOCUMENT SUMMARIZATION", "section_text": "Ramesh Nallapati, Bowen Zhou
Yorktown Heights, NY 10598 USA
Oregon State University"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We present two novel and contrasting Recurrent Neural Network (RNN) based architectures for extractive summarization of documents. The Classifier based architecture sequentially accepts or rejects each sentence in the original document order for its membership in the final summary. The Selector architecture, on the other hand, is free to pick one sentence at a time in any arbitrary order to piece together the summary."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Document summarization is an important problem that has many applications in information retrieval and natural language understanding. Summarization techniques are mainly classified into two categories: extractive and abstractive. Extractive methods aim to select salient snippets, sentences or passages from documents, while abstractive summarization techniques aim to concisely paraphrase the information content in the documents.

A vast majority of the literature on document summarization is devoted to extractive summarization. Traditional methods for extractive summarization can be broadly classified into greedy approaches (e.g., Carbonell & Goldstein (1998)), graph based approaches (e.g., Radev & Erkan (2004)) and constraint optimization based approaches (e.g., McDonald (2007)).

Recently, neural network based approaches have become popular for extractive summarization. For example, Kageback et al. (2014) employed the recursive autoencoder (Socher et al. (2011)) to summarize documents, producing best performance on the Opinosis dataset (Ganesan et al. (2010)). Yin & Pei (2015) applied Convolutional Neural Networks (CNN) to project sentences to continuous vector space and then select sentences by minimizing the cost based on their 'prestige' and 'diverseness', on the task of multi-document extractive summarization. Another related work is that of Cao et al. (2016), who address the problem of query-focused multi-document summarization using query-attention-weighted CNNs.

Recently, with the emergence of strong generative neural models for text (Bahdanau et al. (2014)), abstractive techniques are also becoming increasingly popular (Rush et al. (2015), Nallapati et al. (2016b) and Nallapati et al. (2016a)). Despite the emergence of abstractive techniques, extractive techniques are still attractive as they are less complex, less expensive, and generate grammatically and semantically correct summaries most of the time. In a very recent work, Cheng & Lapata (2016) proposed an attentional encoder-decoder for extractive single-document summarization and trained it on the Daily Mail corpus, a large news data set, achieving state-of-the-art performance. Like Cheng & Lapata (2016), our work also focuses only on sentential extractive summarization of single documents using neural networks.

Our architectures are motivated by two intuitive strategies that humans tend to adopt when they are tasked with extracting salient sentences in a document. The first strategy, which we call Classify, involves reading the whole document once to understand its contents, and then traversing through the sentences in the original document order and deciding whether or not each sentence belongs to the summary.
The other strategy, which we call Select, involves memorizing the whole document once as before, and then picking sentences that should belong to the summary one at a time, in any order of one's choosing. Qualitatively, the latter strategy appears to be a better one since it allows us to make globally optimal decisions at each step. While it may be harder for humans to follow this strategy since we are forgetful by nature, one may expect that the Select strategy could deliver an advantage for the machines, since 'forgetfulness' is not a real 'concern' for them. In this work we will explore both the strategies empirically and make a recommendation on which strategy is optimal under what conditions.

Broadly, our Classify architecture involves an RNN based sequence classification model that sequentially classifies each sentence into 0/1 binary labels, while the Select architecture involves a generative model that sequentially generates the indices of the sentences that should belong to the summary. We will first discuss the components shared by both the architectures and then we will present each architecture separately.

Shared Building Blocks: Both architectures begin with word-level bidirectional Gated Recurrent Unit (GRU) based RNNs (Chung et al. (2014)) run independently over each sentence in the document, where each time-step of the RNN corresponds to a word index in the sentence. The average pooling of the concatenated hidden states of this bidirectional RNN is then used as an input to another bidirectional RNN whose time steps correspond to sentence indices in the document. The concatenated hidden states 'h' from the forward and backward layers of this second layer of bidirectional RNN at each time step are used as the corresponding sentence representations. We also use the average pooling of the sentence representations as the document representation 'd'. Both architectures also maintain a dynamic summary representation 's' whose estimation is architecture dependent. Models under each architecture compute a score for each sentence towards its summary membership. Motivated by the need to build humanly interpretable models, we compute this score by explicitly modeling abstract features such as salience, novelty and information content as shown below:

score(h_j, s_j, d, p_j) = w_c σ(W_c^T h_j)      # (content richness)
                        + w_s σ(cos(h_j, d))     # (salience w.r.t. document)
                        + w_p σ(W_p^T p_j)       # (positional importance)
                        - w_r σ(cos(h_j, s_j))   # (redundancy w.r.t. summary)
                        + b,                     # (bias term)                (1)

where j is the index of the sentence in the document; p_j is the positional embedding of the sentence, computed by concatenation of embeddings corresponding to forward and backward position indices of the sentence in the document; cos(a, b) is the standard cosine similarity between the two vectors a and b; W_c and W_p are parameter vectors to model content richness and positional importance of sentences respectively; and w_c, w_s, w_p and w_r are scalar weights to model the relative importance of the various abstract features, and are learned automatically. In the equation above, the abstract feature that each term represents is printed against the term in comments. In other words, assuming the importance weights are positive, in order for a sentence to score high for summary membership, it needs to be highly salient, content rich and occupy important positions in the document, while being least redundant with respect to the summary generated till that point. Note that our formulation of the scoring function simultaneously captures both salience of the sentence h_j with respect to the document d as well as its redundancy with respect to the current summary representation s_j. In the next subsection, we will describe the estimation of the dynamic summary representation s_j and the formulation of the cost function for training in each architecture. We will also present shallow and deep models under each architecture."}, {"section_index": "3", "section_name": "2.1 CLASSIFIER ARCHITECTURE", "section_text": "In this architecture, we sequentially visit each sentence in the original document order and binary-classify the sentence in terms of whether it belongs to the summary. The probability of the sentence belonging to the summary, P(y_j = 1), is given as follows:

P(y_j = 1 | h_j, s_j, d, p_j) = σ(score(h_j, s_j, d, p_j))        (2)

At training time, we minimize the negative log-likelihood of the observed binary labels:

l(W, w, b) = - Σ_{d=1}^{N} Σ_{j=1}^{N_d} [ y_j^d log P(y_j = 1 | h_j, s_j, d_d) + (1 - y_j^d) log(1 - P(y_j = 1 | h_j, s_j, d_d)) ]

where N is the size of the training corpus and N_d is the number of sentences in the document d. Now the only detail that remains is how the dynamic summary representation s_j is estimated. This is where the shallow and deep models under this architecture differ, and we describe them below.

Figure 1: The shallow and deep versions of the Classifier architecture for extractive summarization. (a) Shallow Classifier Model; (b) Deep Classifier Model.

Shallow Model: In the shallow model, we estimate the dynamic summary representation as the running sum of the representations of the sentences visited so far, weighted by their probability of being in the summary:

s_j = Σ_{i=1}^{j-1} h_i y_i                               # (training time)
s_j = Σ_{i=1}^{j-1} h_i P(y_i = 1 | h_i, s_i, d)          # (test time)        (3)

In other words, at training time, since the summary membership of sentences is known, the probabilities are binary, whereas at test time we use a weighted pooling based on the estimated probability that each sentence belongs to the summary. There is no need to normalize the summary representations since the cosine similarity metric we use in the scoring function of Eq. (1) automatically normalizes them.

Deep Model: In the deep model, we introduce an additional layer of unidirectional sentence-level GRU-RNN that takes as input the sentence representations h_j at each time-step. The hidden state of the new GRU, ĥ_j = GRU(h_j), is used as a replacement for the sentence representation h_j in computing summary membership scores using Eq. (1) as well as in computing the dynamic summary representation using Eq. (3). The main idea behind using this additional layer of GRU is to allow a greater degree of non-linearity in computing the summary representation.

The graphical representations of the shallow and deep models under the Classifier architecture are displayed in Figure 1 with their full set of dependencies.
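The following is a minimal NumPy sketch of the shallow Classifier's test-time pass over a document (Eqs. 1-3). The sentence representations h, document vector d and positional embeddings p are assumed to come from the bidirectional GRU encoders described above; parameter names mirror the equations and how they are instantiated is left to the caller.

```python
# Minimal sketch of the shallow Classifier, assuming precomputed encodings.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def classify(h, d, p, Wc, Wp, w_c, w_s, w_p, w_r, b):
    """h: (n_sents, dim), d: (dim,), p: (n_sents, pdim). Returns P(y_j = 1)."""
    n, dim = h.shape
    s = np.zeros(dim)                           # dynamic summary representation
    probs = np.zeros(n)
    for j in range(n):
        score = (w_c * sigmoid(Wc @ h[j])       # content richness
                 + w_s * sigmoid(cos(h[j], d))  # salience w.r.t. document
                 + w_p * sigmoid(Wp @ p[j])     # positional importance
                 - w_r * sigmoid(cos(h[j], s))  # redundancy w.r.t. summary
                 + b)
        probs[j] = sigmoid(score)               # Eq. (2)
        s = s + probs[j] * h[j]                 # Eq. (3), test-time update
    return probs
```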
"}, {"section_index": "4", "section_name": "2.2 SELECTOR ARCHITECTURE", "section_text": "In this architecture, the models do not make decisions in the sequence of sentence ordering; instead they pick one sentence at a time in an order that they deem fit. The act of picking a sentence is cast as a sequential generative model in which one sentence-index is emitted at each time step that maximizes the score in Eq. (1). Accordingly, the probability of picking a sentence with index I(j) = k ∈ {1, ..., N_d} at time-step j is given by the softmax over the scoring function:

P(I(j) = k | h, s_j, d) = exp(score(h_k, s_j, d, p_k)) / Σ_{l=1}^{N_d} exp(score(h_l, s_j, d, p_l))

The loss function in this case is the negative log-likelihood of the selected sentences in the ground truth data, as shown below:

l(W, w, b) = - Σ_{d=1}^{N} Σ_{j=1}^{M_d} log P(I(j)^{(d)} | h, s_j, d_d)

where M_d is the number of sentences selected in the ground truth of document d, and {I(1)^{(d)}, ..., I(M_d)^{(d)}} is the ordered list of selected sentence indices in the ground truth of document d. The dependence of the loss function on the order of the selected sentences can be gauged by the fact that the probability of selecting a sentence at time step j depends on the dynamic summary representation s_j, which is estimated based on all the sentences selected up to time step j - 1.

At test time, at each time-step, the model emits the index of the sentence that has the best score given the current summary representation, as shown below:

I(j) = argmax_{k ∈ {1, ..., N_d}} score(h_k, s_j, d, p_k)

The estimation of the dynamic summary representation is done differently for the shallow and deep selector models, as described below.

Shallow Model: In this model, we sum the representations of the sentences selected until the time step j as the dynamic summary representation. This is true for both training time and test time:

s_j = Σ_{i=1}^{j-1} h_{I(i)}

Deep Model: In the deep model, we introduce an additional GRU-RNN whose time steps correspond to the sentence index emission events. At each time-step, it takes as input the representation of the previously selected sentence h_{I(j-1)}, and computes a new hidden state ĥ_j = GRU(h_{I(j-1)}). Unlike the shallow model, which maintains a separate vector for the summary representation s_j, we use ĥ_j as the summary representation s_j at time step j. This makes sense for the case of the Selector architecture since both at training and test time we make hard decisions of sentence selection, with the effect that the hidden state of the new GRU can capture a non-linear aggregation of the sentences selected until time step j - 1.

Figure 2: Selector architecture for extractive summarization. The shallow and deep versions are identical except for the fact that the simple vector representation of the summary in the shallow version is replaced with a gated recurrent unit in the deep version.

Fig. 2 shows the graphical representation of the Selector architecture with all the dependencies between the nodes. The architecture is the same for both shallow and deep models, with the only difference being that the simple summary representation in the former is replaced with a gated recurrent unit in the latter.
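A minimal NumPy sketch of the shallow Selector's test-time decoding follows. `score()` is the same abstract-feature scorer sketched above for Eq. (1); indices are emitted greedily, skipping sentences already selected, and the summary representation is the running sum of selected sentence vectors.

```python
# Minimal sketch of the shallow Selector's greedy test-time decoding.
import numpy as np

def select(h, d, p, score, n_select):
    """h: (n_sents, dim). Returns the ordered list of selected indices."""
    n, dim = h.shape
    s = np.zeros(dim)                       # dynamic summary representation
    selected = []
    for _ in range(min(n_select, n)):
        scores = np.array([score(h[k], s, d, p[k]) for k in range(n)])
        # The softmax over `scores` defines P(I(j) = k); at test time we take
        # the best not-yet-emitted index, as described above.
        for k in np.argsort(-scores):
            if int(k) not in selected:
                selected.append(int(k))
                s = s + h[k]                # shallow summary update
                break
    return selected
```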
Previous researchers such as Shen et al. (2007) have proposed modeling extractive document summarization as a sequence classification problem using Conditional Random Fields. Our approach is different from theirs in the sense that we use RNNs in our model that do not require any handcrafted features for representing sentences and documents.

The Selector architecture broadly involves ranking of sentences by some criterion, and therefore does correspond to traditional methods for extractive summarization such as TextRank (Mihalcea & Tarau (2004)) that also involve ranking of sentences by salience and novelty. However, to the best of our knowledge, our Selector framework is a novel deep learning framework for extractive summarization. Broader efforts are being made in the deep learning community to build more sophisticated sequence to sequence models towards the objective of automatically learning complex tasks such as sorting sequences (Vinyals et al. (2015); Graves et al. (2014)), but their utility for extractive summarization remains to be explored.

In the deep learning framework, the extractive summarization work of Cheng & Lapata (2016) is the closest to our work. Their model is based on an encoder-decoder approach where the encoder learns the representation of sentences and documents while the decoder classifies each sentence using an attention mechanism. Broadly, their model is also in the Classifier framework, but architecturally our approaches are different. While their approach can be termed as a multi-pass approach where both the encoder and decoder consume the same sentence representations, our approach is a deep one where the representations learned by the bidirectional GRU encoder are in turn consumed by the Classifier or Selector models. Another key difference between our work and theirs is that, unlike our unsupervised greedy approach to convert abstractive summaries to extractive labels, Cheng & Lapata (2016) chose to train a separate supervised classifier using manually created labels on a subset of the data. This may yield more accurate gold extractive labels which may help boost the performance of their models, but incurs additional annotation costs.

Pseudo ground-truth generation: In order to train our extractive Classifier and Selector models, for each document we need ground truth in the form of sentence-level binary labels and an ordered list of selected sentences respectively. However, most summarization corpora only contain human written abstractive summaries as ground truth. To solve this problem, we use an unsupervised approach to convert the abstractive summaries to extractive labels. Our approach is based on the idea that the selected sentences from the document should be the ones that maximize the Rouge score with respect to the gold abstractive summaries. Since it is computationally expensive to find a globally optimal subset of sentences that maximizes the Rouge score, we employ a greedy approach, where we add one sentence at a time incrementally to the summary, such that the Rouge score of the current set of selected sentences is maximized with respect to the entire gold summary. We stop adding sentences when either none of the remaining candidate sentences improves the Rouge score upon addition to the current summary set or when the maximum summary length is reached. We return this ordered list of sentences as the ground truth for the Selector architecture. The ordered list is converted into binary summary-membership labels that are consumed by the Classifier architecture for training. A sketch of this procedure is given below.

We note that similar approaches have been employed by other researchers such as Svore et al. (2007) to handle the problem of converting abstractive summaries to extractive ground truth. We would also like to point readers to a recent work by Cao et al. (2015) that proposes an ILP based approach to solve this problem optimally. Since this is not the focus of this work, we chose a simple greedy algorithm.
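The greedy conversion can be sketched as follows. Here `rouge(candidate_sents, reference)` is a placeholder for any Rouge recall implementation (the paper does not name one); it is an assumption of this sketch, not a function the authors provide, and it must accept an empty candidate list.

```python
# Minimal sketch of greedy conversion from abstractive to extractive labels,
# assuming a `rouge(candidate_sents, reference)` callable is supplied.
def greedy_extractive_labels(doc_sents, reference, max_sents, rouge):
    """Returns (ordered_indices, binary_labels)."""
    selected = []                                  # ordered sentence indices
    best = rouge([], reference)
    while len(selected) < max_sents:
        candidates = [i for i in range(len(doc_sents)) if i not in selected]
        if not candidates:
            break
        scored = [(rouge([doc_sents[j] for j in selected + [i]], reference), i)
                  for i in candidates]
        score, i = max(scored)
        if score <= best:            # no candidate improves Rouge: stop
            break
        selected.append(i)
        best = score
    labels = [1 if i in selected else 0 for i in range(len(doc_sents))]
    return selected, labels
```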
Corpora: For our experiments, we used the Daily Mail corpus originally constructed by Hermann et al. (2015) for the task of passage-based question answering, and re-purposed for the task of document summarization as proposed in Cheng & Lapata (2016) for extractive summarization and Nallapati et al. (2016a) for abstractive summarization. Overall, we have 196,557 training documents, 12,147 validation documents and 10,396 test documents from the Daily Mail corpus. On average, there are about 28 sentences per document in the training set, and an average of 3-4 sentences in the reference summaries. The average word count per document in the training set is 802.

We also used the DUC 2002 single-document summarization dataset (http://www-nlpir.nist.gov/projects/duc/guidelines/2002.html) consisting of 567 documents as an additional out-of-domain test set to evaluate our models.

Evaluation: In our experiments below, we evaluate the performance of our models using different variants of the Rouge metric (http://www.berouge.com/Pages/default.aspx) computed with respect to the gold abstractive summaries. Following Cheng & Lapata (2016), we use limited length Rouge recall at 75 bytes of summary as well as 275 bytes on the Daily Mail corpus. On the DUC 2002 corpus, following the official guidelines, we use limited length Rouge recall at 75 words. We report the scores from Rouge-1, Rouge-2 and Rouge-L, which are computed using matches of unigrams, bigrams and longest common subsequences respectively, with the ground truth summaries.

Baselines: On all datasets, we use the Lead-3 model, which simply produces the leading three sentences of the document as the summary, as a baseline. On the Daily Mail and DUC 2002 corpora, we also report the performance of LReg, a feature-rich logistic classifier used as a baseline by Cheng & Lapata (2016). On the DUC 2002 corpus, we report several baselines such as an Integer Linear Programming based approach (Woodsend & Lapata (2010)), and graph based approaches such as TGRAPH (Parveen et al. (2015)) and URANK (Wan (2010)), which achieve very high performance on this corpus. In addition, we also compare with the state-of-the-art deep learning supervised extractive model from Cheng & Lapata (2016).

Experimental Settings: We used 100-dimensional word2vec (Mikolov et al. (2013)) embeddings trained on the Daily Mail corpus as our embedding initialization. We limited the vocabulary size to 150K and the maximum sentence length to 50 words, to speed up computation. We fixed the model hidden state size at 200. We used a batch size of 32 at training time, and employed adadelta (Zeiler (2012)) to train our model. We employed gradient clipping and L-2 regularization to prevent overfitting, and an early stopping criterion based on validation cost.

At test time, for the Classifier models we pick sentences sorted by the predicted probabilities until we exceed the length limit as determined by the Rouge metric. Likewise, we allow the Selector models to emit sentence indices until the desired summary length is reached. For the Selector model, we also make sure the emitted sentence ids are not repeated across time steps by traversing down the sorted predicted probabilities of the softmax layer at each time step until we reach a sentence-id that was not emitted before.

We note that it is possible to optimize the Classifier performance at test time using the Viterbi algorithm to compute the best sequence of labels, subject to the Markovian assumptions of the architecture and model. Similarly, it is also possible to further boost the Selector's performance by using beam search at test time. However, in this work we used greedy classification/selection for inference since our primary interest is in comparing the two architectures, and our choice allows us to make a fair apples-to-apples comparison. A minimal sketch of the Classifier's test-time selection under a byte budget follows.
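This sketch assembles a Classifier summary under the limited-length Rouge setting; the budget values 75/275 come from the evaluation protocol above, and the final re-sorting into document order is an assumption of this sketch rather than a stated detail of the method.

```python
# Minimal sketch of test-time summary assembly under a byte budget.
def assemble_summary(doc_sents, probs, byte_budget=275):
    order = sorted(range(len(doc_sents)), key=lambda i: -probs[i])
    chosen, used = [], 0
    for i in order:
        chosen.append(i)
        used += len(doc_sents[i].encode("utf8")) + 1   # +1 for a separator
        if used >= byte_budget:                        # pick until we exceed
            break
    # Restoring document order for readability is an assumption of this sketch.
    return [doc_sents[i] for i in sorted(chosen)]
```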
Results on Daily Mail corpus: Table 1 shows the performance comparison of our models with the state-of-the-art model of Cheng & Lapata (2016) and other baselines on the Daily Mail corpus, using Rouge recall at two different summary lengths.

Table 1: Performance of various models on the entire Daily Mail test set using the limited length recall variants of Rouge with respect to the abstractive ground truth at 75 bytes and 275 bytes. Entries with an asterisk are statistically significant using a 95% confidence interval with respect to the nearest state-of-the-art model, as estimated by the Rouge script.

                 Recall at 75 bytes                 Recall at 275 bytes
Model            Rouge-1     Rouge-2     Rouge-L    Rouge-1     Rouge-2     Rouge-L
Lead-3           21.9        7.2         11.6       40.5        14.9        32.6
LReg(500)        18.5        6.9         10.2       N/A         N/A         N/A
Cheng '16        22.7        8.5         12.5       42.2        17.3*       34.8
Shal.-Select     25.6        10.3        14.0       41.3        16.8        34.9
Deep-Select      26.1        10.7        14.4       41.3        15.3        33.5
Shal.-Cls.       26.0        10.5        14.23      42.1        16.8        34.8
Deep-Cls.        26.2*±0.4   10.7*±0.4   14.4*±0.4  42.2±0.2    16.8±0.2    35.0±0.2

The results show that contrary to our initial expectation, the Classifier architecture is superior to the Selector architecture. Within each architecture, the deeper models are better performing than the shallower ones. Our deep classifier model outperforms Cheng & Lapata (2016) with a statistically significant margin at 75 bytes, while matching their model at 275 bytes. One potential reason our models do not consistently outperform the extractive model of Cheng & Lapata (2016) is the additional supervised training they used to create sentence-level extractive labels to train their model. Our models instead use an unsupervised greedy approximation to create extractive labels from abstractive summaries, and as a result, may generate noisier ground truth than theirs.

Results on the Out-of-Domain DUC 2002 corpus: We also evaluated the models trained on the Daily Mail corpus on the out-of-domain DUC 2002 set, as shown in Table 2. The performance trend is similar to that on Daily Mail. Our best model, Deep Classifier, is again statistically on par with the model of Cheng & Lapata (2016). However, both models perform worse than the graph-based TGRAPH (Parveen et al. (2015)) and URANK (Wan (2010)) algorithms, which are the state-of-the-art models on this corpus. Deep learning based supervised models such as ours and that of Cheng & Lapata (2016) perform very well on the domain they are trained on, but may suffer from domain adaptation issues when tested on a different corpus such as DUC 2002.

Table 2: Performance of various models on the DUC 2002 set using the limited length recall variants of Rouge at 75 words. Our Deep Classifier is statistically within the margin of error at 95% C.I. with respect to the model of Cheng & Lapata (2016), but both are lower than state-of-the-art results due to out-of-domain training.

                      Rouge-1     Rouge-2     Rouge-L
Lead-3                43.6        21.0        40.2
LReg                  43.8        20.7        40.3
ILP                   45.4        21.3        42.8
TGRAPH                48.1        24.3*       -
URANK                 48.5*       21.5        -
Cheng et al. '16      47.4        23.0        43.5
Shallow-Selector      44.6        20.0        41.1
Deep-Selector         45.9        21.5        42.4
Shallow-Classifier    45.9        21.5        42.3
Deep-Classifier       46.8±0.9    22.6±0.9    43.1±0.9
Impact of Document Structure: In all our experiments thus far, the Classifier architecture has proven superior to the Selector architecture. We conjecture that decision making in the same sequence as the original sentence ordering is perhaps advantageous in document summarization, since there is a smooth sequential discourse structure in news stories, starting with the main highlights of the story in the beginning, more elaborate description in the middle, and ending with conclusive remarks. If this is true, then in scenarios where sentence ordering is less structured, the Selector architecture should be superior since it has the freedom to select salient sentences in any arbitrary order. Such scenarios actually do occur in practice, e.g., summarization of a cluster of tweets on a topic where there is no specific discourse structure between individual tweets, or in multi-document summarization where a pair of sentences across document boundaries have no specific ordering. In order to test this hypothesis, we simulated such data in the Daily Mail corpus by randomly shuffling the sentences in each document in the training set, retraining models under both the architectures, and evaluating them on the original test sets. The results, summarized in Table 3, show that the Classifier architecture suffers bigger losses than the Selector architecture when the document structure is destroyed. In fact, the Selector architecture performs slightly better than the Classifier architecture when trained on the shuffled data, indicating that our hypothesis may indeed be true.

Table 3: Simulated experiment to demonstrate the impact of document discourse structure on model performance. Evaluation is done using Rouge limited length recall at 275 bytes. The Selector architecture exhibits superior performance when the discourse structure of the document is destroyed.

Qualitative Analysis: One of the advantages of our model design is teasing out various abstract features for the sake of interpretability of system predictions. In the appendix, we present a visualization (see Fig. 3 in the Appendix) of the system predictions based on the scores for the various abstract features listed in Eq. (1). We also present the learned importance weights of these features in Table 4. A few representative documents are also presented in the appendix, highlighting the sentences chosen by our models for summarization.

In this work, we propose two neural architectures for extractive summarization. Our proposed models under these architectures are not only very interpretable, but also achieve state-of-the-art performance on two different data sets. We also empirically compare our two frameworks and suggest conditions under which each of them can deliver optimal performance.

As part of our future work, we plan to further investigate the applicability of the novel Selector architecture to relatively less structured summarization problems such as summarization of multiple documents or topical clusters of tweets. In addition, we also intend to perform additional experiments on the Daily Mail dataset, such as incorporating beam search in both model inference as well as in pseudo ground truth generation, that may result in further performance improvements."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. Attsum: Joint learning of focusing and summarization with neural attention. arXiv preprint arXiv:1604.00125, 2016.

Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. Opinosis: a graph-based approach to abstractive summarization of highly redundant opinions.
In Proceedings of the 23rd International Conference on Computational Linguistics, pp. 340-348. Association for Computational Linguistics, 2010.

Mikael Kageback, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. Extractive summarization using continuous vector space models. pp. 31-39, 2014.

Ryan McDonald. A study of global inference algorithms in multi-document summarization. pp. 557-564, 2007.

Dragomir Radev and Gunes Erkan. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, pp. 457-479, 2004.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013.

Ramesh Nallapati, Bowen Zhou, and Bing Xiang. Sequence-to-sequence RNNs for text summarization. International Conference on Learning Representations, Workshop track, 2016b.

Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. International Conference on Learning Representations, 2015.

Dou Shen, Jian-Tao Sun, Hua Li, Qiang Yang, and Zheng Chen. Document summarization using conditional random fields. In Proceedings of IJCAI, 2007.

Krysta M. Svore, Lucy Vanderwende, and Christopher J.C. Burges. Enhancing single-document summarization by combining RankNet and third-party sources. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 448-457, 2007.

Xiaojun Wan. Towards a unified approach to simultaneous single-document and multi-document summarizations. In Proceedings of the 23rd COLING, pp. 1137-1145, 2010.

Kristian Woodsend and Mirella Lapata. Automatic generation of story highlights. In Proceedings of the 48th ACL, pp. 565-574, 2010.

Richard Socher, Eric H. Huang, Jeffrey Pennington, Christopher D. Manning, and Andrew Y. Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. pp. 801-809, 2011."}, {"section_index": "6", "section_name": "7 APPENDIX", "section_text": "In this section, we will present some additional qualitative and quantitative analysis of our models that we hope will shed some light on their behavior."}, {"section_index": "7", "section_name": "7.1 VISUALIZATION OF MODEL OUTPUT", "section_text": "In addition to being state-of-the-art performers, our models have the additional advantage of being very interpretable. The clearly separated terms in the scoring function (see Eqn. 1) allow us to tease out various factors responsible for the classification/selection of each sentence. This is illustrated in Figure 3, where we display a representative document from our validation set along with normalized scores from each abstract feature of the deep classifier model. Such visualization is especially useful in explaining to the end-user the decisions made by the system."}, {"section_index": "8", "section_name": "7.2 LEARNED IMPORTANCE WEIGHTS", "section_text": "We display in Table 4 the learned importance weights corresponding to the various abstract features for the deep sentence selector. Confirming our intuition, the model learns that salience and redundancy are the most important predictive features for summary membership of a sentence, followed by the positional feature and the content based feature. Further, when the same model is trained on documents with randomly shuffled sentences, it learns a very small weight for the positional features, which is exactly what one expects.
Table 4: Learned weights of various abstract features from the deep sentence selector model. Salience and redundancy are the most important features as learned by the model, followed by position and content. The negative sign for position weights has no particular significance. The positional feature gets very low weight when the document structure is destroyed by randomly shuffling sentences in each document in the training data.

Gold Summary: Redpath has ended his eight-year association with Sale Sharks. Redpath spent five years as a player and three as a coach at Sale. He has thanked the owners, coaches and players for their support.

                                                              Salience  Content  Novelty  Position  Prob.
Bryan Redpath has left his coaching role at Sale Sharks
with immediate effect.                                        0.1       0.1      0.9      0.1       0.3
The 43-year-old Scot ends an eight-year association with
the Aviva Premiership side, having spent five years with
them as a player and three as a coach.                        0.9       0.6      0.9      0.9       0.7
Redpath returned to Sale in June 2012 as director of rugby
after starting a coaching career at Gloucester and
progressing to the top job at Kingsholm.                      0.8       0.5      0.5      0.9       0.6
Redpath spent five years with Sale Sharks as a player and a
further three as a coach but with Sale Sharks struggling
four months into Redpath's tenure, he was removed from the
director of rugby role at the Salford-based side and has
since been operating as head coach.                           0.8       0.9      0.7      0.8       0.9
'I would like to thank the owners, coaches, players and
staff for all their help and support since I returned to
the club in 2012.                                             0.4       0.1      0.1      0.7       0.2
Also to the supporters who have been great with me both as
a player and as a coach,' Redpath said.                       0.6       0.0      0.2      0.3       0.2

Figure 3: Visualization of Deep Classifier output on a representative document. Each row is a sentence in the document, while the shading-color intensity in the first column is proportional to its probability of being in the summary, as estimated by the scoring function. In the columns are the normalized scores from each of the abstract features in Eqn. (1) as well as the final prediction probability (last column). Sentence 2 is estimated to be the most salient, while the longest one, sentence 4, is considered the most content-rich, and not surprisingly, the first sentence the most novel. The third sentence gets the best position based score."}, {"section_index": "9", "section_name": "7.3 ABLATION EXPERIMENTS", "section_text": "We evaluated the performance of the deep selector and deep classifier models on the validation set by deleting one abstract feature at a time from the model, with replacement. The performance numbers, displayed in Table 5, show that removing any of the features results in a small loss in performance. Note that the priority of features in the ablation experiments need not correspond to their priority in terms of learned weights in Table 4, since feature correlations may affect the two metrics differently. For the deep classifier, content and redundancy seem to matter the most, while for the deep selector, dropping positional features hurts the most. Based on this analysis, we plan
to investigate more thoroughly the reasons behind the poor ablation performance of salience and redundancy in the classifier and selector models respectively.

Table 5: Ablation experiments on the validation set to gauge the relative importance of each abstract feature. The top row is where all four abstract features are present. The following rows correspond to removal of one feature at a time, with replacement. Evaluation is done using Rouge limited length recall at 275 bytes. Bold faced entries correspond to the largest reduction in performance when the corresponding features are dropped.

                 Deep Classifier                   Deep Selector
Features         Rouge-1    Rouge-2    Rouge-L     Rouge-1    Rouge-2    Rouge-L
All              42.43      17.32      34.07       41.55      16.52      32.41
-Salience        42.40      17.27      34.09       40.82      15.99      31.45
-Position        41.78      16.76      33.58       39.06      14.32      29.85
-Contents        41.12      15.78      33.23       40.68      15.83      31.13
-Redundancy      41.67      16.86      32.93       41.46      16.50      32.31
"}, {"section_index": "10", "section_name": "7.4 REPRESENTATIVE DOCUMENTS AND EXTRACTIVE SUMMARIES", "section_text": "We display a couple of representative documents, one each from the Daily Mail and DUC corpora, highlighting the sentences chosen by the deep classifier and comparing them with the gold summaries in Table 6. The examples demonstrate qualitatively that the model performs a reasonably good job in identifying the key messages of a document.

Document: @entity0 have an interest in @entity3 defender @entity2 but are unlikely to make a move until january. the 00 - year - old @entity6 captain has yet to open talks over a new contract at @entity3 and his current deal runs out in 0000.

Document: today, the foreign ministry said that control operations carried out by the corvette spiro against a korean-flagged ship fishing illegally in argentine waters were carried out \" in accordance with international law and in coordination with the foreign ministry \". the foreign ministry thus approved the intervention by the argentine corvette when it discovered the korean ship chin yuan hsing violating argentine jurisdictional waters on 00 may. ... the korean ship, which had been fishing illegally in argentine waters, was sunk by its own crew after failing to answer to the argentine ship's warnings. the crew was transferred to the chin chuan hsing, which was sailing nearby and approached to rescue the crew of the sinking ship.

Gold Summary: the korean-flagged fishing vessel chin yuan hsing was scuttled in waters off argentina on 00 may 0000. adverse weather conditions prevailed when the argentine corvette spiro spotted the korean ship fishing illegally in restricted argentine waters. the korean vessel did not respond to the corvette's warning. instead, the korean crew sank their ship, and transferred to another korean ship sailing nearby. in accordance with a uk-argentine agreement, the argentine navy turned the surveillance of the second korean vessel over to the british when it approached within 00 nautical miles.

Table 6: Example documents and gold summaries from the Daily Mail (top) and DUC 2002 (bottom) corpora. The sentences chosen by the deep classifier for extractive summarization are highlighted in bold."}]
HkcdHtqlx | [{"section_index": "0", "section_name": "GATED-ATTENTION READERS FOR TEXT COMPREHENSION", "section_text": "{bdhingra, hanxiaol, zhiliny, wcohen, rsalakhu}@cs.cmu.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A recent trend to measure progress towards machine reading is to test a system's ability to answer questions about a document it has to comprehend. Towards this end, several large-scale datasets of cloze-style questions over a context document have been introduced recently, which allow the training of supervised machine learning systems (Hermann et al., 2015; Hill et al., 2015; Onishi et al., 2016). Such datasets can be easily constructed automatically, and the unambiguous nature of their queries provides an objective benchmark to measure a system's performance at text comprehension.

Deep learning models have recently been shown to outperform traditional shallow approaches on text comprehension tasks (Hermann et al., 2015). The success of many recent models can be attributed primarily to two factors: (1) Multi-hop architectures (Weston et al., 2014; Sordoni et al., 2016; Shen et al., 2016) allow a model to scan the document and the question iteratively for multiple passes. (2) Attention mechanisms (Weston et al., 2014; Chen et al., 2016; Hermann et al., 2015), borrowed from the machine translation literature (Bahdanau et al., 2014), allow the model to focus on appropriate subparts of the context document. Intuitively, the multi-hop architecture allows the reader to incrementally refine token representations, and the attention mechanism re-weights different parts in the document according to their relevance to the query.

The effectiveness of multi-hop reasoning and attention has been explored orthogonally so far in the literature. In this paper, we focus on combining both in a complementary manner, by designing a novel attention mechanism which gates the evolving token representations across hops. More specifically, unlike existing models where the query attention is applied either token-wise (Hermann et al., 2015; Kadlec et al., 2016; Chen et al., 2016; Hill et al., 2015) or sentence-wise (Weston et al., 2014; Sukhbaatar et al., 2015) to allow weighted aggregation, the Gated-Attention (GA) module proposed in this work allows the query to directly interact with each dimension of the token embeddings at the semantic level, and is applied layer-wise as information filters during the multi-hop representation learning process. Such a fine-grained attention enables our model to learn conditional token representations with respect to the given question, leading to accurate answer selections.

We show in our experiments that the proposed GA reader, despite its relative simplicity, consistently improves over a variety of strong baselines on three benchmark datasets. Our key contribution, the GA module, provides a significant improvement when the dataset size is large. Qualitatively, visualization of the attentions at intermediate layers of the GA reader shows that in each layer the GA reader attends to distinct salient aspects of the query which help in determining the answer."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper we study the problem of answering cloze-style questions over documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop architecture with a novel attention mechanism, which is based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader. This enables the reader to build query-specific representations of tokens in the document for accurate answer selection.
The GA Reader obtains state-of-the-art results on three benchmarks for this task-the CNN & Daily Mail news stories and the Who Did What dataset. The effectiveness of multiplicative interaction is demonstrated by an ablation study, and by comparing to alternative compositional operators for implementing the gated-attention."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "The cloze-style QA task involves tuples of the form (d, q, a, C), where d is a document (context), q is a query over the contents of d, in which a phrase is replaced with a placeholder, and a is the answer to q, which comes from a set of candidates C. In this work we consider datasets where each candidate c ∈ C has at least one token which also appears in the document. The task can then be described as: given a document-query pair (d, q), find a ∈ C which answers q. Below we provide an overview of representative neural network architectures which have been applied to this problem.

LSTMs with Attention: Several architectures introduced in (Hermann et al., 2015) employ LSTM units to compute a combined document-query representation g(d, q), which is used to rank the candidate answers. Their techniques include the DeepLSTM Reader, which performs a single forward pass through the concatenated (document, query) pair to obtain g(d, q); the Attentive Reader, which first computes a document vector d(q) by a weighted aggregation of words according to attentions based on q, and then combines d(q) and q to obtain their joint representation g(d(q), q); and the Impatient Reader, where the document representation is built incrementally. The architecture of the Attentive Reader has been simplified recently in the Stanford Attentive Reader, where shallower recurrent units were used with a bilinear form for the query-document attention (Chen et al., 2016).

Attention Sum: The Attention-Sum (AS) Reader (Kadlec et al., 2016) uses two bi-directional GRU networks (Cho et al., 2014) to encode both d and q into vectors, similar to the Stanford AR. A probability distribution over the entities in d is obtained by computing dot products between q and the entity embeddings and taking a softmax. An aggregation scheme named pointer-sum attention is further applied to sum the probabilities of the same entity, so that frequent entities in the document will be favored compared to rare ones. Building on the AS Reader, the Attention-over-Attention (AoA) Reader (Cui et al., 2016) introduces a two-way attention mechanism where the query and the document are mutually attentive to each other.

Multi-hop Architectures: Memory Networks (MemNets) were proposed in (Weston et al., 2014), where each sentence in the document is encoded to a memory by aggregating nearby words. Attention over the memory slots given the query is used to compute an overall memory and to renew the query representation over multiple iterations, allowing certain types of reasoning over the salient facts in the memory and the query. Neural Semantic Encoders (NSE) (Munkhdalai & Yu, 2016a) extended MemNets by introducing a write operation which can evolve the memory over time during the course of reading.
Iterative reasoning has been found effective in several more recent models, including the Iterative Attentive Reader (Sordoni et al., 2016) and ReasoNet (Shen et al., 2016). The latter allows a dynamic number of reasoning steps and is trained with reinforcement learning.

Other related works include the Dynamic Entity Representation network (DER) (Kobayashi et al., 2016), which builds dynamic representations of the candidate answers while reading the document, and accumulates the information about an entity by max-pooling. EpiReader (Trischler et al., 2016) consists of two networks, where one proposes a small set of candidate answers, and the other reranks the proposed candidates conditioned on the query and the context. (Bajgar et al., 2016) showed a 10% improvement on the CBT corpus (Hill et al., 2015) by training the AS Reader on an augmented training set of about 14 million examples, making a case for the community to exploit data abundance. The focus of this paper, however, is on designing models which exploit the available data efficiently.

Our proposed GA readers perform multiple hops over the document (context), similar to the Memory Networks architecture (Sukhbaatar et al., 2015). Multi-hop architectures mimic the multi-step comprehension process of human readers, and have shown promising results in several recent models for text comprehension (Sordoni et al., 2016; Kumar et al., 2015; Shen et al., 2016). The contextual representations in GA readers, namely the embeddings of words in the document, are iteratively refined across hops until reaching a final attention-sum module (Kadlec et al., 2016) which maps the contextual representations in the last hop to a probability distribution over candidate answers.

The attention mechanism has been introduced recently to model human focus, leading to significant improvements in machine translation and image captioning (Bahdanau et al., 2014; Mnih et al., 2014). In reading comprehension tasks, ideally, the semantic meanings carried by the contextual embeddings should be aware of the query across hops. As an example, human readers are able to keep the question in mind during multiple passes of reading, to successively mask away information irrelevant to the query. However, existing neural network readers are restricted to either attend to tokens (Hermann et al., 2015; Chen et al., 2016) or entire sentences (Weston et al., 2014), with the assumption that certain sub-parts of the document are more important than others. In contrast, we propose a finer-grained model which attends to components of the semantic representation being built up by the GRU. The new attention mechanism, called gated-attention, is implemented based on multiplicative interactions between the query and the contextual embeddings, and is applied per hop to act as fine-grained information filters during the multi-step reasoning. The filters weigh individual components of the vector representation of each token in the document separately.

The design of gated-attention layers is motivated by the effectiveness of multiplicative interaction among vector-space representations, e.g., in various types of recurrent units (Hochreiter & Schmidhuber, 1997; Wu et al., 2016) and in relational learning (Yang et al., 2014; Kiros et al., 2014). While other types of compositional operators are possible, such as concatenation or addition (Mitchell & Lapata, 2008), we find that multiplication has strong empirical performance (section 4.4).
Intuitively, multiplicative interaction e ⊙ q between two word embeddings e and q adjusts the semantic meaning of e towards q, keeping the compositionality of the original embeddings preserved."}, {"section_index": "4", "section_name": "3.2 MODEL DETAILS", "section_text": "Several components of the model use a Gated Recurrent Unit (GRU) (Cho et al., 2014), which maps an input sequence X = [x_1, x_2, ..., x_T] to an output sequence H = [h_1, h_2, ..., h_T] as follows:

r_t = σ(W_r x_t + U_r h_{t-1} + b_r)
z_t = σ(W_z x_t + U_z h_{t-1} + b_z)
h̃_t = tanh(W_h x_t + U_h(r_t ⊙ h_{t-1}) + b_h)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t

where ⊙ denotes the Hadamard product or the element-wise multiplication. r_t and z_t are called the reset and update gates respectively, and h̃_t the candidate output. A Bi-directional GRU (Bi-GRU) processes the sequence in both forward and backward directions to produce two sequences [h_1^f, h_2^f, ..., h_T^f] and [h_1^b, h_2^b, ..., h_T^b], which are concatenated at the output:

GRU(X) = [h_1^f ∥ h_T^b, ..., h_T^f ∥ h_1^b]

Figure 1 illustrates the Gated-Attention (GA) reader. The model reads the document and the query over K horizontal layers, where layer k receives the contextual embeddings X^(k-1) of the document from the previous layer. The document embeddings are transformed by taking the full output of a document Bi-GRU (indicated in blue in Figure 1):

D^(k) = GRU_D^(k)(X^(k-1))

Figure 1: Gated-Attention Reader. Dashed lines represent dropout connections.

At the same time, a layer-specific query representation is computed as the full output of a separate query Bi-GRU (indicated in green in Figure 1):

Q^(k) = GRU_Q^(k)(Q)

Gated-attention is then applied to compute the inputs to the next layer:

X^(k) = GA(D^(k), Q^(k))

where GA is defined in the following subsection."}, {"section_index": "5", "section_name": "3.2.2 GATED-ATTENTION MODULE", "section_text": "For brevity, let us drop the superscript k in this subsection as we are focusing on a particular layer. For each token d_i in D, the GA module forms a token-specific representation of the query q̃_i using soft attention, and then multiplies the query representation element-wise with the document token representation. Specifically, for i = 1, ..., |D|:

α_i = softmax(Q^T d_i)
q̃_i = Q α_i
x_i = d_i ⊙ q̃_i        (6)

In equation (6) we use the multiplication operator to model the interactions between d_i and q̃_i. In the experiments section, we also report results for other choices of gating functions, including addition x_i = d_i + q̃_i and concatenation x_i = d_i ∥ q̃_i.
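A minimal sketch of one GA layer is shown below, written in PyTorch for illustration (the authors' implementation is in Theano/Lasagne, per section 4.2); shapes and names here are illustrative, and the GRUs are assumed to be constructed with `batch_first=True` and matching input/output sizes across layers.

```python
# Minimal PyTorch sketch of the gated-attention module and the K-hop loop.
import torch

def gated_attention(D, Q):
    """D: (|D|, d) document token states; Q: (|Q|, d) query token states."""
    alpha = torch.softmax(D @ Q.t(), dim=1)   # alpha_i = softmax(Q^T d_i), batched
    q_tilde = alpha @ Q                       # token-specific query representations
    return D * q_tilde                        # elementwise (Hadamard) gating

def ga_reader_layers(X0, Q_emb, doc_grus, qry_grus):
    """Runs K hops: X^(k) = GA(GRU_D^(k)(X^(k-1)), GRU_Q^(k)(Q))."""
    X = X0
    for gru_d, gru_q in zip(doc_grus, qry_grus):
        D, _ = gru_d(X.unsqueeze(0))          # document Bi-GRU, batch of one
        Qk, _ = gru_q(Q_emb.unsqueeze(0))     # layer-specific query Bi-GRU
        X = gated_attention(D.squeeze(0), Qk.squeeze(0))
    return X                                  # fed to the answer-prediction step
```

The returned representations are consumed by the final-layer document Bi-GRU and the attention-sum step described next.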
"}, {"section_index": "7", "section_name": "3.2.4 FURTHER ENHANCEMENTS", "section_text": "Character-level Embeddings: Given a token $w$ from the document or query, its vector space representation is computed as $x = L(w) \,\|\, C(w)$. $L(w)$ retrieves the word embedding for $w$ from a lookup table $L \in \mathbb{R}^{|V| \times n_L}$, whose rows hold a vector for each unique token in the vocabulary. We also utilize a character composition model $C(w)$ which generates an orthographic embedding of the token. Such embeddings have been previously shown to be helpful for tasks like Named Entity Recognition (Yang et al., 2016) and dealing with OOV tokens at test time (Dhingra et al., 2016). The embedding $C(w)$ is generated by taking the final outputs $z_f$ and $z_b$ of a Bi-GRU applied to embeddings from a lookup table of characters in the token, and applying a linear transformation:

$$z = z_f \,\|\, z_b, \qquad C(w) = Wz + b$$

Question Evidence Common Word Feature (qe-comm): (Li et al., 2016) recently proposed a simple token-level indicator feature which significantly boosts reading comprehension performance in some cases. For each token in the document we construct a one-hot vector $f_i \in \{0, 1\}^2$ indicating whether that token is present in the query or not. It can be incorporated into the GA reader by assigning a feature lookup table $F \in \mathbb{R}^{2 \times n_F}$ (we use $n_F = 2$), taking the feature embedding $e_i = f_i^\top F$, and appending it to the inputs of the last-layer document Bi-GRU as $x_i^{(K-1)} \,\|\, e_i$. We conducted several experiments both with and without this feature and observed some interesting trends, which are discussed below. Henceforth, we refer to this feature as the qe-comm feature or just feature.
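The qe-comm feature is simple enough to spell out in a few lines. The sketch below uses our own function name and a randomly initialized lookup table F; in the actual model the embeddings $e_i$ are appended to the last-layer document Bi-GRU inputs.

```python
import numpy as np

def qe_comm_embeddings(doc_tokens, query_tokens, F):
    """For each document token, build the one-hot indicator f_i in {0,1}^2
    (is the token present in the query?) and embed it as e_i = f_i^T F."""
    in_query = set(query_tokens)
    fs = np.array([[0.0, 1.0] if w in in_query else [1.0, 0.0]
                   for w in doc_tokens])        # |D| x 2 one-hot indicators
    return fs @ F                               # |D| x n_F feature embeddings

F = 0.1 * np.random.randn(2, 2)                 # F in R^{2 x n_F}, n_F = 2
e = qe_comm_embeddings("obama met putin in moscow".split(),
                       "xxx met putin".split(), F)
```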
"}, {"section_index": "8", "section_name": "4.1 DATASETS", "section_text": "We evaluate the GA reader on five large-scale datasets recently proposed in the literature. The first two, CNN and Daily Mail news stories, consist of articles from the popular CNN and Daily Mail websites (Hermann et al., 2015). A query over each article is formed by removing an entity from the short summary which follows the article. Further, entities within each article were anonymized to make the task purely a comprehension one. N-gram statistics, for instance, computed over the entire corpus are no longer useful in such an anonymized corpus.

The next two datasets are formed from two different subsets of the Children's Book Test (CBT) (Hill et al., 2015). Documents consist of 20 contiguous sentences from the body of a popular children's book, and queries are formed by deleting a token from the 21st sentence. We only focus on subsets where the deleted token is either a common noun (CN) or named entity (NE) since simple language models already give human-level performance on the other types (cf. (Hill et al., 2015)).

The final dataset we evaluate on is Who Did What (WDW) (Onishi et al., 2016), constructed from the LDC English Gigaword newswire corpus. First, article pairs which appeared around the same time and with overlapping entities are chosen, and then one article forms the document and a cloze query is constructed from the other. Missing tokens are always person named entities. Questions which are easily answered by simple baselines are filtered out, to make the task more challenging. There are two versions of the training set—a small but focused "Strict" version and a large but noisy "Relaxed" version. We report results on both settings, which share the same validation and test sets. Statistics of all the datasets used in our experiments are summarized in Table 1.

Table 1: Dataset statistics

                 CNN      Daily Mail  CBT-NE   CBT-CN   WDW-Strict  WDW-Relaxed
# train          380,298  879,450     108,719  120,769  127,786     185,978
# validation     3,924    64,835      2,000    2,000    10,000      10,000
# test           3,198    53,182      2,500    2,500    10,000      10,000
# vocab          118,497  208,045     53,063   53,185   347,406     308,602
max doc length   2,000    2,000       1,338    1,338    3,085       3,085"}, {"section_index": "9", "section_name": "4.2 IMPLEMENTATION DETAILS", "section_text": "Our model was implemented using the Theano (Theano Development Team, 2016) and Lasagne Python libraries. We used stochastic gradient descent with ADAM updates for optimization, which combines classical momentum and adaptive gradients (Kingma & Ba, 2014). The batch size was 32 and the initial learning rate was $5 \times 10^{-4}$, which was halved every epoch after the second epoch. The same setting is applied to all models and datasets. We also used gradient clipping with a threshold of 10 to stabilize GRU training (Pascanu et al., 2012). We set the number of layers $K$ to be 3 for all experiments, and provide further analysis below. The number of hidden units for the character GRU was set to 50. The remaining two hyperparameters—size of document and query GRUs, and dropout rate—were tuned on the validation set, and their optimal values are shown in Table 2. In general, the optimal GRU size increases and the dropout rate decreases as the corpus size increases.

Table 2: Hyperparameter settings for each dataset. dim(GRU) indicates the hidden state size of the GRU.

Hyperparameter  CNN   Daily Mail  CBT-NE  CBT-CN  WDW-Strict  WDW-Relaxed
Dropout         0.2   0.1         0.4     0.4     0.3         0.3
dim(GRU)        256   256         128     128     128         128

The word lookup table was initialized with 100d GloVe vectors (Pennington et al., 2014) and OOV tokens at test time were assigned unique random vectors. We empirically observed that initializing with pre-trained embeddings gives higher performance compared to random initialization for all datasets. Furthermore, for smaller datasets (WDW and CBT) we found that fixing these embeddings to their pretrained values led to higher test performance, possibly since it avoids overfitting. We do not use the character composition model for CNN and Daily Mail, since entities (and hence candidate answers) are anonymized to generic tokens in these datasets. For other datasets the character lookup table was randomly initialized with 25d vectors. All other parameters were initialized to their default values as specified in the Lasagne library.
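The optimization recipe above is easy to restate in code. The following sketch assumes a global-norm variant of gradient clipping, which is one common reading of "clipping with a threshold of 10"; the function names are ours.

```python
import numpy as np

def ga_learning_rate(epoch, base_lr=5e-4):
    """Initial rate 5e-4, halved every epoch after the second (epochs 1-indexed)."""
    return base_lr if epoch <= 2 else base_lr * 0.5 ** (epoch - 2)

def clip_gradients(grads, threshold=10.0):
    """Rescale the full gradient list if its global norm exceeds the threshold."""
    norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    return grads if norm <= threshold else [g * (threshold / norm) for g in grads]
```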
"}, {"section_index": "10", "section_name": "4.3 PERFORMANCE COMPARISON", "section_text": "Tables 3 and 5 show a comparison of the performance of GA Reader with previously published results on WDW and on the CNN, Daily Mail, and CBT datasets, respectively. The numbers reported for GA Reader are for single best models, though we compare to both ensembles and single models from prior work. GA Reader-- refers to an earlier version of the model, unpublished but described in a preprint, with the following differences: (1) it does not utilize token-specific attentions within the GA module, as described in equation (5); (2) it does not use a character composition model; (3) it is initialized with word embeddings pretrained on the corpus itself rather than GloVe. A detailed analysis of these differences is studied in the next section. Here we present 4 variants of the latest GA Reader, using combinations of whether the qe-comm feature is used (+feature) or not, and whether the word lookup table L(w) is updated during training or fixed to its initial value.

Table 3: Validation/Test accuracy (%) on the WDW dataset for both "Strict" and "Relaxed" settings. Results marked with † are from previously published works.

                              Strict          Relaxed
Model                         Val    Test     Val    Test
Human †                       --     84       --     --
Attentive Reader †            53     55       --     --
AS Reader †                   57     59       --     --
Stanford AR †                 64     65       --     --
NSE †                         66.5   66.2     67.0   66.7
GA-- †                        57     60.0     67.0   67.0
GA (update L(w))              67.8   66.6     --     --
GA (fix L(w))                 68.3   68.0     69.6   69.1
GA (+feature, update L(w))    70.1   69.5     70.9   71.0
GA (+feature, fix L(w))       71.6   71.2     72.6   72.6

Table 4: Top: Performance of different gating functions (Sum, Concatenate, Multiply). Bottom: Effect of varying the number of hops K (1 through 4, where K = 1 is equivalent to the AS Reader). Results on the WDW dataset without using the qe-comm feature and with fixed L(w).

Table 5: Validation/Test accuracy (%) on CNN, Daily Mail and CBT. Results marked with † are from previously published works. Results marked with ‡ were obtained by training on a larger training set. Best performance on standard training sets is in bold, and on larger training sets in italics.

                                          CNN            Daily Mail     CBT-NE         CBT-CN
Model                                     Val    Test    Val    Test    Val    Test    Val    Test
Humans (query) †                          --     --      --     --      --     52.0    --     64.4
Humans (context + query) †                --     --      --     --      --     81.6    --     81.6
LSTMs (context + query) †                 --     --      --     --      51.2   41.8    62.6   56.0
Deep LSTM Reader †                        55.0   57.0    63.3   62.2    --     --      --     --
Attentive Reader †                        61.6   63.0    70.5   69.0    --     --      --     --
Impatient Reader †                        61.8   63.8    69.0   68.0    --     --      --     --
MemNets †                                 63.4   66.8    --     --      70.4   66.6    64.2   63.0
AS Reader †                               68.6   69.5    75.0   73.9    73.8   68.6    68.8   63.4
DER Network †                             71.3   72.9    --     --      --     --      --     --
Stanford AR (relabeling) †                73.8   73.6    77.6   76.6    --     --      --     --
Iterative Attentive Reader †              72.6   73.3    --     --      75.2   68.6    72.1   69.2
EpiReader †                               73.4   74.0    --     --      75.3   69.7    71.5   67.4
AoA Reader †                              73.1   74.4    --     --      77.8   72.0    72.2   69.4
ReasoNet †                                72.9   74.7    77.6   76.6    --     --      --     --
NSE †                                     --     --      --     --      78.2   73.2    74.3   71.9
MemNets (ensemble) †                      66.2   69.4    --     --      --     --      --     --
AS Reader (ensemble) †                    73.9   75.4    78.7   77.7    76.2   71.0    71.1   68.9
Stanford AR (relabeling, ensemble) †      77.2   77.6    80.2   79.2    --     --      --     --
Iterative Attentive Reader (ensemble) †   75.2   76.1    --     --      76.9   72.0    74.1   71.0
EpiReader (ensemble) †                    --     --      --     --      76.6   71.8    73.6   70.6
AS Reader (+BookTest) † ‡                 --     --      --     --      80.5   76.2    83.2   80.8
AS Reader (+BookTest, ensemble) † ‡       --     --      --     --      82.3   78.4    85.7   83.7
GA--                                      73.0   73.8    76.7   75.7    74.9   69.0    69.0   63.9
GA (update L(w))                          77.9   77.9    81.5   80.9    76.7   70.1    69.8   67.3
GA (fix L(w))                             77.9   77.8    80.4   79.6    77.2   71.4    71.6   68.0
GA (+feature, update L(w))                77.3   76.9    80.7   80.0    77.2   73.3    73.0   69.8
GA (+feature, fix L(w))                   76.7   77.4    80.0   79.3    78.5   74.9    74.4   70.7

Interestingly, we observe that feature engineering leads to significant improvements for the WDW and CBT datasets, but not for the CNN and Daily Mail datasets. We note that anonymization of the latter datasets means that there is already some feature engineering (it adds hints about whether a token is an entity), and these are much larger than the other four. In machine learning it is common to see the effect of feature engineering diminish with increasing data size. Similarly, fixing the word
embeddings provides an improvement for WDW and CBT, but not for CNN and Daily Mail. This is not surprising given that the latter datasets are larger and less prone to overfitting.

Figure 2: Performance in accuracy with and without the Gated-Attention module over different amounts of training data. p-values for an exact one-sided McNemar's test are given inside the parentheses for each setting.

Comparing with prior work, on the WDW dataset the basic version of the GA Reader outperforms all previously published models when trained on the Strict setting. By adding the qe-comm feature the performance increases by 3.2% and 3.5% on the Strict and Relaxed settings respectively, to set a new state of the art on this dataset. On the CNN and Daily Mail datasets the GA Reader leads to an improvement of 3.2% and 4.3% respectively over the best previous single models. They also outperform previous ensemble models, setting a new state of the art for both datasets. For CBT-NE, GA Reader with the qe-comm feature outperforms all previous single and ensemble models except the AS Reader trained on the much larger BookTest Corpus (Bajgar et al., 2016). Lastly, on CBT-CN the GA Reader with the qe-comm feature outperforms all previously published single models except the NSE, and the AS Reader trained on a larger corpus."}, {"section_index": "11", "section_name": "4.4 GA READER ANALYSIS", "section_text": "In this section we do an ablation study to see the effect of Gated Attention. We compare the GA Reader as described here to a model which is exactly the same in all aspects, except that it passes document embeddings $D^{(k)}$ in each layer directly to the inputs of the next layer without using the GA module. In other words, $X^{(k)} = D^{(k)}$ for all $k > 0$. This model ends up using only one query GRU at the output layer for selecting the answer from the document. We compare these two variants, both with and without the qe-comm feature, on the CNN and WDW datasets for three subsets of the training data: 50%, 75% and 100%. Test set accuracies for these settings are shown in Figure 2. On CNN, when tested without feature engineering, we observe that GA provides a significant boost in performance compared to without GA. When tested with the feature it still gives an improvement, but the improvement is significant only with 100% training data. On WDW-Strict, which is a third of the size of CNN, without the feature we see an improvement when using GA versus without using GA, which becomes significant as the training set size increases. When tested with the feature on WDW, for a small data size without GA does better than with GA, but as the dataset size increases they become equivalent. We conclude that Gated Attention provides a boost in the absence of feature engineering, or as the training set size increases.

Next we look at the question of how to gate intermediate document reader states from the query, i.e., what operation to use in equation (6). Table 4 (top) shows the performance on the WDW dataset for three common choices: sum ($x = d + q$), concatenate ($x = d \,\|\, q$) and multiply ($x = d \odot q$). Empirically we find that element-wise multiplication does significantly better than the other two, which justifies our motivation to "filter" out document features which are irrelevant to the query.
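For reference, the three gating choices compared in Table 4 (top) differ only in a single line; this stand-in uses plain NumPy vectors.

```python
import numpy as np

def gate(d, q_tilde, op="multiply"):
    """Compositional operators compared in section 4.4."""
    if op == "sum":
        return d + q_tilde                      # x_i = d_i + q~_i
    if op == "concatenate":
        return np.concatenate([d, q_tilde])     # x_i = d_i || q~_i
    return d * q_tilde                          # x_i = d_i (element-wise) q~_i
```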
At the bottom of Table 4 we show the effect of varying the number of hops K of the GA Reader on the final performance. We note that for K = 1, our model is equivalent to the AS Reader without any GA modules. We see a steep and steady rise in accuracy as the number of hops is increased from K = 1 to K = 3, which remains constant beyond that. This is a fairly common trend in machine learning as model complexity is increased; however, we note that a multi-hop architecture is important to achieve a high performance for this task, and provide further evidence for this in the next section.

Lastly, we perform an ablation study for the three components of the GA Reader which were absent in the preprint version (GA Reader--). Table 6 shows accuracy on WDW by removing one component at a time. The steepest reduction is observed when we replace pretrained GloVe vectors with those pretrained on the corpus itself. GloVe vectors were trained on a large corpus of about 6 billion tokens (Pennington et al., 2014), and provide an important source of prior knowledge for the model. We note here that the strongest baseline on WDW, NSE (Munkhdalai & Yu, 2016b), also uses pretrained GloVe vectors, hence the comparison is fair in that respect. Next, we observe a substantial drop when removing token-specific attentions over the query in the GA module, which allow gating individual tokens in the document by only the query components relevant to that token, rather than the whole query representation. Finally, removing the character embeddings leads to the smallest drop in performance, as shown in Table 6.

Table 6: Ablation study on the WDW dataset, without using the qe-comm feature and with fixed L(w). Results marked with † are from Onishi et al. (2016).

Model                        Val    Test
GA                           68.3   68.0
-char                        66.9   66.9
-token-attentions (eq. 5)    65.7   65.0
-glove, +corpus              64.0   62.5
GA-- †                       --     57"}, {"section_index": "12", "section_name": "4.5 ATTENTION VISUALIZATION", "section_text": "To gain an insight into the reading process employed by the model we analyzed the attention distributions at intermediate layers of the reader. Figure 3 shows an example from the validation set of the WDW dataset (several more are in the Appendix).
In each figure, the left and middle plots visualize attention over the query (equation (5)) for candidates in the document after layers 1 & 2 respectively. The right plot shows attention over candidates in the document of the cloze placeholder (XXX) in the query at the final layer. The full document, query and correct answer are shown at the bottom.

A generic pattern observed in these examples is that in intermediate layers, candidates in the document (shown along rows) tend to pick out salient tokens in the query which provide clues about the cloze, and in the final layer the candidate with the highest match with these tokens is selected as the answer. In Figure 3 there is a high attention of the correct answer on financial regulatory standards in the first layer, and on us president in the second layer. The incorrect answer, in contrast, only attends to one of these aspects, and hence receives a lower score in the final layer despite the n-gram overlap it has with the cloze token in the query. Importantly, different layers tend to focus on different tokens in the query, which supports the hypothesis that the multi-hop architecture of GA Reader is able to combine distinct pieces of information to answer the query.

Figure 3: Layer-wise attention visualization of GA Reader trained on WDW-Strict. See text for details.

Doc: japanese prime minister taro aso said friday he would call for stronger monitoring of international finance at the g20 summit next week in london . '' we will have to emphatically argue that the foundation of the international monetary fund ( imf ) is weak and that we must establish financial regulations and supervision , '' aso told a legislative session . other world leaders have also pushed for stricter regulations of risky and unrestrained investment practices and instruments blamed for triggering the current global economic crisis . japan officially agreed in february to lend up to 100 billion dollars to the imf to provide financial lifelines to emerging economies hit hard by the worldwide downturn . us treasury secretary timothy geithner has said president barack obama would discuss new global financial regulatory standards at the london summit
QRY: <beg> us president barack obama will push higher financial regulatory standards for across the globe at the upcoming g20 summit in london , xxx said thursday <end>"}, {"section_index": "13", "section_name": "5 CONCLUSION", "section_text": "We presented the Gated-Attention reader for answering cloze-style questions over documents. The GA reader features a novel multiplicative gating mechanism, combined with a multi-hop architecture. Our model achieves state-of-the-art performance on several large-scale benchmark datasets with more than 4% improvements over competitive baselines. Our model design is backed up by an ablation study showing statistically significant improvements of using Gated Attention as information filters. We also showed empirically that multiplicative gating is superior to addition and concatenation operations for implementing gated-attentions, though a theoretical justification remains part of future research goals. Analysis of document and query attentions in intermediate layers of the reader further reveals that the model iteratively attends to different aspects of the query to arrive at the final answer. In this paper we have focused on text comprehension, but we believe that the Gated-Attention mechanism may benefit other tasks as well where multiple sources of information interact. Concurrent to our work, (Chu et al., 2016) have also shown the effectiveness of GA Readers on the LAMBADA dataset (Paperno et al., 2016) for language modeling."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "Danqi Chen, Jason Bolton, and Christopher D Manning. A thorough examination of the cnn/daily mail reading comprehension task. arXiv preprint arXiv:1606.02858, 2016.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.

Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W Cohen. Tweet2vec: Character-based distributed representations for social media. ACL, 2016.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1684-1692, 2015.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. Text understanding with the attention sum reader network. arXiv preprint arXiv:1603.01547, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Sosuke Kobayashi, Ran Tian, Naoaki Okazaki, and Kentaro Inui. Dynamic entity representations with max-pooling improves machine reading. In NAACL-HLT, 2016.

Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. arXiv preprint arXiv:1506.07285, 2015.

Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. arXiv preprint arXiv:1607.06275, 2016.

Tsendsuren Munkhdalai and Hong Yu. Neural semantic encoders. arXiv preprint arXiv:1607.04315, 2016a.

Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Who did what: A large-scale person-centered cloze dataset. EMNLP, 2016.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063, 2012.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2431-2439, 2015.

Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. arXiv preprint arXiv:1606.06630, 2016.

Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Learning multi-relational semantics using neural-embedding models. arXiv preprint arXiv:1411.4072, 2014.

Zhilin Yang, Ruslan Salakhutdinov, and William Cohen. Multi-task cross-lingual sequence tagging from scratch. arXiv preprint arXiv:1603.06270, 2016.

Denis Paperno, German Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031, 2016.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the epireader. arXiv preprint arXiv:1606.02270, 2016."}, {"section_index": "15", "section_name": "A ATTENTION PLOTS", "section_text": "Figure 4: Layer-wise attention visualization of GA Reader trained on WDW-Strict. See text for details.

DOC: result sunday from the open 13 , a ( euro ) 512,750 ( $ 697,400 ) atp world tour indoor hardcourt event at palais des sports ( seedings in parentheses ) : singles final michael llodra , france , def . julien benneteau ( 8 ) , france , 6-3 , 6-4 .
doubles final michael llodra and julien benneteau , france ( 2 ) , def . julian knowle , austria , and robert lindstedt , sweden ( 1 ) , 6-4 , 6-3 .

DOC: england pace bowler ajmal shahzad has warned australia that his '' fearless '' team-mates are ready to deal their rivals a psychological blow in the forthcoming one-day series . shahzad has been called into england 's squad for five matches against australia as cover for the injured ryan sidebottom and he senses an extremely positive mood in the dressing room . the 24-year-old admits england are desperate to warm up for the ashes in australia later this year by giving ricky ponting 's men another beating after defeating them in the recent icc world twenty20 final in the caribbean . and shahzad , the first british-born asian to play for yorkshire , is confident andrew strauss 's side wo n't back down against the ultra-aggressive australians . '' it will be a step up for the lads . but everybody is focused and ready for it , '' shahzad said . '' there 's no fear , no nerves - and it 's nice to be part of that kind of dressing room . '' sidebottom 's hamstring problem has given shahzad his opportunity in the one-day squad and he is determined to seize the opportunity to stake his claim for a permanent place by impressing against the australians . '' it 's just nice to be called up , and i hope if my chance comes i can grasp it with both hands , '' he said . '' a lot 's happened to me in the last six months . if the chance comes - ryan 's got a niggle , but i do n't think it 's too bad - i 've just got to put in a decent performance and have my name in the hat for the games to come . '' but it 's just nice to be involved , and know you 're there or thereabouts . i just hope i can get the nod . ''

Figure 5: Layer-wise attention visualization of GA Reader trained on WDW-Strict. See text for details.

Doc: us secretary of state hillary clinton sought to break the ice with her russian counterpart on friday by handing him a fake '' reset '' button -- or at least that was what it was supposed to be . clinton handed russian foreign minister sergei lavrov the button wrapped in a ribbon as they began their first meeting in a luxury hotel in geneva . earlier this year , us vice president joe biden told foreign leaders at an international security conference in munich , southern germany , that the obama administration wanted to improve ties with moscow . '' it is time to press the reset button and to revisit the many areas where we can and should work together , '' biden said . as she proffered the red plastic button , clinton told lavrov : '' we want to reset our relationship . and so we will do it together , '' she said , laughing . but the button also bore a russian word that was meant to translate as '' reset '' . '' we worked hard to get the right russian word . do you think we got it ? '' clinton asked lavrov . '' you got it wrong , '' he responded as they both laughed . '' it should be 'perezagruzka ' ( the russian word for 'reset ' ) , '' the russian foreign minister pointed out . '' this says , peregruzka , which means overcharged .
\" :* we wo n't let you do. that to us , \" clinton replied . russian speakers indicated that the mistaken word was better translated as 'overload . ' lavrov promised to keep it on his des. K QRy: <beg> us secretary of state hillary clinton meets xxx on friday and she said she is sure they will not overcome all differences . <end>. ANS: sergei lavrov .0 0.9 0.8 AeC iC\nQRY: <beg> us secretary of state hillary clinton meets Xxx on friday and she said she is sure they will not overcome all differences . <end. ANS: sergei lavrov\nQRY: <beg> president-elect barack obama In a 's senate sea. end>\nFigure 6: Layer-wise attention visualization of GA Reader trained on WDw-Strict. See text for detail\nC DOc: european union officials complained on tuesday about the lack of gas flow from russia through ukraine to europe after russia resumed early gas supplies under. three- way deal signed on the previous day . european commission president jose manuel barroso spoke by phone to russian prime minister vladimir putin , express. ing disappointment over the lack of natural gas flowing to europe . eu monitors on the ground reported that only very little gas is flowing through the pipeline. s . barroso voiced his *' disappointment with both the level of gas flowing to europe \" and the lack of access '' of our monitors to dispatch centers, \" acco. rding to his aide . putin promised him to take a look into what he complained . on the same day , czech prime minister mirek topolanek spoke on the phone to his. ukrainian counterpart yulia tymoshenko about the matter , said a press release from the czech eu presidency . tymoshenko informed topolanek , who asked about t. he causes and circumstances of the delay in supplies , of some technical difficulties , saying that more specifically the pressure of gas arriving from the russ. ia is too low . the czech prime minister recommended her to immediately contact the eurogaz experts who are ready to assist ukraine with technical problems . ty moshenko promised to act on this offer . russia reopened taps tuesday morning to let gas flow to europe via ukraine after cutting off gas supplies to europe on. wednesday amid a pricing dispute with ukraine . the cutoff left a number of european countries in lack of heating gas amid freezing weather .\nQRY: <beg> europe the pointment aine to the blc an aide said . <end>\n1.0 siden 0.9 a olas rkozy 0.8 on on on 0.7 L 0.6 1O n in 0.5 0.4 rkozy sarkozy koz 0.3 0.2 emp 0.1 be 0.0 7\nDoc: president of the palestinian authority mahmud abbas held talks with french president nicolas sarkozy on friday , and slammed the israeli plan of constructing mo. re settlement buildings on the west bank as '' unacceptable . \" ' that is not acceptable , \" said abbas after meeting with sarkozy in the elysee palace . it. was reported that israeli prime minister benjamin netanyahu planned to approval the construction of new home buildings on the west bank before considering a fre eze on settlement activities . :' we want a freeze on settlement and the launch of negotiations on the final phase of it , \" abbas said . '' this was the main. subject of our talks . \" according to a statement from the elysee palace , talks between the two leaders were aimed at starting again the peace process within. the palestinian territories , as well as discussing regional issues . during the meeting , sarkozy emphasized the urgency of the resumption of a negotiation pro. cess between israel and palestine . 
this has been abbas ' third official visit to france since 2007 .
QRY: israeli premier approves a fast expansion of west bank settlements before considering a <end>

Figure 7: Layer-wise attention visualization of GA Reader trained on WDW-Strict. See text for details.

DOC: dinara safina has barely avoided becoming the first no . 1-seeded woman to lose in the first round at the u.s. open . safina overcame 11 double-faults and 48 unforced errors to come back and beat 167th-ranked olivia rogowska of australia 6-7 ( 5 ) , 6-2 , 6-4 in arthur ashe stadium on tuesday . safina , the younger sister of two-time major champion marat safin , moved up to no . 1 in the rankings in april -- and is assured of staying there no matter what happens at flushing meadows . the russian reached the finals at the australian open and french open this year , losing both . rogowska is 18 , received a wild-card invitation into the u.s. open and has won one grand slam match . she never has defeated anyone ranked better than 47th .
QRY: <beg> critics and justify title at the us open the same way xxx did in 2000 . <end>

Doc: if there was a shred of doubt where the world cup was built and won for spain this year , it was removed monday night when barcelona destroyed real madrid , 5-0 in teeming rain . host barcelona simply outplayed real , which led the spanish league until monday . barcelona 's lineup contained eight home-bred players , seven of them world champions . xavi hernandez and pedro rodriguez scored , david villa scored twice , and jeffren suarez scored the fifth as a substitute . the loss ended madrid 's 26-game unbeaten streak . lionel messi for once did not score . but messi struck a post , and he was involved in three of the goals . indeed , by taking a deeper role and taking considerable brutish tackles , he epitomized barcelona 's collective will to work for one another .
QRY: <beg> barcelona cruised into the knockout round of the champions league wednesday by beating panathinaikos 3-0 after two goals from pedro rodriguez and another from xxx . <end>
ANS: lionel messi"}]
S1VaB4cex | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Residual networks (He et al., 2016a), or ResNets, lead a recent and dramatic increase in both depth and accuracy of convolutional neural networks, facilitated by constraining the network to learn residuals. ResNet variants (He et al., 2016a;b; Huang et al., 2016b) and related architectures (Srivastava et al., 2015) employ the common technique of initializing and anchoring, via a pass-through channel, a network to the identity function. Training now differs in two respects. First, the objective changes to learning residual outputs, rather than unreferenced absolute mappings. Second, these networks exhibit a type of deep supervision (Lee et al., 2014), as near-identity layers effectively reduce distance to the loss. He et al. (2016a) speculate that the former, the residual formulation itself, is crucial.

We show otherwise, by constructing a competitive extremely deep architecture that does not rely on residuals. Our design principle is pure enough to communicate in a single word, fractal, and a simple diagram (Figure 1). Yet, fractal networks implicitly recapitulate many properties hard-wired into previous successful architectures. Deep supervision not only arises automatically, but also drives a type of student-teacher learning (Ba & Caruana, 2014; Urban et al., 2017) internal to the network. Modular building blocks of other designs (Szegedy et al., 2015; Liao & Carneiro, 2015) resemble special cases of a fractal network's nested substructure.

The entirety of emergent behavior resulting from a fractal design may erode the need for recent engineering tricks intended to achieve similar effects. These tricks include residual functional forms with identity initialization, manual deep supervision, hand-crafted architectural modules, and student-teacher training regimes. Section 2 reviews this large body of related techniques. Hybrid designs could certainly integrate any of them with a fractal architecture; we leave open the question of the degree to which such hybrids are synergistic."}, {"section_index": "1", "section_name": "ULTRA-DEEP NEURAL NETWORKS WITHOUT RESIDUALS", "section_text": "Gustav Larsson
Michael Maire
mmaire@ttic.edu
Gregory Shakhnarovich
greg@ttic.edu
We introduce a design strategy for neural network macro-architecture based on self-similarity. Repeated application of a simple expansion rule generates deep networks whose structural layouts are precisely truncated fractals. These networks contain interacting subpaths of different lengths, but do not include any pass-through or residual connections; every internal signal is transformed by a filter and nonlinearity before being seen by subsequent layers. In experiments, fractal networks match the excellent performance of standard residual networks on both CIFAR and ImageNet classification tasks, thereby demonstrating that residual representations may not be fundamental to the success of extremely deep convolutional neural networks. Rather, the key may be the ability to transition, during training, from effectively shallow to deep. We note similarities with student-teacher behavior and develop drop-path, a natural extension of dropout, to regularize co-adaptation of subpaths in fractal architectures. Such regularization allows extraction of high performance fixed-depth subnetworks.
Additionally, fractal networks exhibit an anytime property: shallow subnetworks provide a quick answer, while deeper subnetworks, with higher latency, provide a more accurate answer.

For fractal networks, simplicity of training mirrors simplicity of design. A single loss, attached to the final layer, suffices to drive internal behavior mimicking deep supervision. Parameters are randomly initialized. As they contain subnetworks of many depths, fractal networks are robust to choice of overall depth: make them deep enough and training will carve out a useful assembly of subnetworks.

Figure 1: Fractal architecture. Left: A simple expansion rule generates a fractal architecture with $C$ intertwined columns. The base case, $f_1(z)$, has a single layer of the chosen type (e.g. convolutional) between input and output. Join layers compute element-wise mean. Right: Deep convolutional networks periodically reduce spatial resolution via pooling. A fractal version uses $f_C$ as a building block between pooling layers. Stacking $B$ such blocks yields a network whose total depth, measured in terms of convolution layers, is $B \cdot 2^{C-1}$. This example has depth 40 ($B = 5$, $C = 4$).

As an additional contribution, we develop drop-path, a novel regularization protocol for ultra-deep fractal networks. Without data augmentation, fractal networks, trained with drop-path and dropout (Hinton et al., 2012), exceed the performance of residual networks regularized via stochastic depth (Huang et al., 2016b). Though, like stochastic depth, it randomly removes macro-scale components, drop-path further exploits our fractal structure in choosing which components to disable.

Drop-path constitutes not only a regularization strategy, but also provides means of optionally imparting fractal networks with anytime behavior. A particular schedule of dropped paths during learning prevents subnetworks of different depths from co-adapting. As a consequence, both shallow and deep subnetworks must individually produce correct output. Querying a shallow subnetwork thus yields a quick and moderately accurate result in advance of completion of the full network.

We introduce FractalNet, the first simple alternative to ResNet. FractalNet shows that explicit residual learning is not a requirement for building ultra-deep neural networks. Through analysis and experiments, we elucidate connections between FractalNet and an array of phenomena engineered into previous deep network designs.

Section 3 elaborates the technical details of fractal networks and drop-path. Section 4 provides experimental comparisons to residual networks across the CIFAR-10, CIFAR-100 (Krizhevsky, 2009), SVHN (Netzer et al., 2011), and ImageNet (Deng et al., 2009) datasets. We also evaluate regularization and data augmentation strategies, investigate subnetwork student-teacher behavior during training, and benchmark anytime networks obtained using drop-path. Section 5 provides synthesis. By virtue of encapsulating many known, yet seemingly distinct, design principles, self-similar structure may materialize as a fundamental component of neural architectures.

Deepening feed-forward neural networks has generally returned dividends in performance. A striking example within the computer vision community is the improvement on the ImageNet (Deng et al.,
2009) classification task when transitioning from AlexNet (Krizhevsky et al., 2012) to VGG (Simonyan & Zisserman, 2015) to GoogLeNet (Szegedy et al., 2015) to ResNet (He et al., 2016a). Unfortunately, greater depth also makes training more challenging, at least when employing a first-order optimization method with randomly initialized layers. As the network grows deeper and more non-linear, the linear approximation of a gradient step becomes increasingly inappropriate. Desire to overcome these difficulties drives research on both optimization techniques and network architectures.

On the optimization side, much recent work yields improvements. To prevent vanishing gradients, ReLU activation functions now widely replace sigmoid and tanh units (Nair & Hinton, 2010). This subject remains an area of active inquiry, with various tweaks on ReLUs, e.g. PReLUs (He et al., 2015) and ELUs (Clevert et al., 2016). Even with ReLUs, employing batch normalization (Ioffe & Szegedy, 2015) speeds training by reducing internal covariate shift. Good initialization can also ameliorate this problem (Glorot & Bengio, 2010; Mishkin & Matas, 2016). Path-SGD (Neyshabur et al., 2015) offers an alternative normalization scheme. Progress in optimization is somewhat orthogonal to our architectural focus, with the expectation that advances in either are ripe for combination.

Notable ideas in architecture reach back to skip connections, the earliest example of a nontrivial routing pattern within a neural network. Recent work further elaborates upon them (Maire et al., 2014; Hariharan et al., 2015). Highway networks (Srivastava et al., 2015) and ResNet (He et al., 2016a;b) offer additional twists in the form of parameterized pass-through and gating. In work subsequent to our own, Huang et al. (2016a) investigate a ResNet variant with explicit skip connections. These methods share distinction as the only other designs demonstrated to scale to hundreds of layers and beyond. ResNet's building block uses the identity map as an anchor point and explicitly parameterizes an additive correction term (the residual). Identity initialization also appears in the context of recurrent networks (Le et al., 2015). A tendency of ResNet and highway networks to fall back to the identity map may make their effective depth much smaller than their nominal depth.

Some prior results hint at what we experimentally demonstrate in Section 4. Namely, reduction of effective depth is key to training extremely deep networks; residuals are incidental. Huang et al. (2016b) provide one clue in their work on stochastic depth: randomly dropping layers from ResNet during training, thereby shrinking network depth by a constant factor, provides additional performance benefit. We build upon this intuition through drop-path, which shrinks depth much more drastically.

The success of deep supervision (Lee et al., 2014) provides another clue that effective depth is crucial. Here, an auxiliary loss, forked off mid-level layers, introduces a shorter path during backpropagation. The layer at the fork receives two gradients, originating from the main loss and the auxiliary loss, that are added together. Deep supervision is now common, being adopted, for example, by GoogLeNet (Szegedy et al., 2015).
However, irrelevance of the auxiliary loss at test time introduces the drawback of having a discrepancy between the actual objective and that used for training.

Exploration of the student-teacher paradigm (Ba & Caruana, 2014) illuminates the potential for interplay between networks of different depth. In the model compression scenario, a deeper network (previously trained) guides and improves the learning of a shallower and faster student network (Ba & Caruana, 2014; Urban et al., 2017). This is accomplished by feeding unlabeled data through the teacher and having the student mimic the teacher's soft output predictions. FitNets (Romero et al., 2015) explicitly couple students and teachers, forcing mimic behavior across several intermediate points in the network. Our fractal networks capture yet another alternative, in the form of implicit coupling, with the potential for bidirectional information flow between shallow and deep subnetworks.

Widening networks, by using larger modules in place of individual layers, has also produced performance gains. For example, an Inception module (Szegedy et al., 2015) concatenates results of convolutional layers of different receptive field size. Stacking these modules forms the GoogLeNet architecture. Liao & Carneiro (2015) employ a variant with maxout in place of concatenation. Figure 1 makes apparent our connection with such work. As a fractal network deepens, it also widens. Moreover, note that stacking two 2D convolutional layers with the same spatial receptive field (e.g. 3 × 3) achieves a larger (5 × 5) receptive field. A horizontal cross-section of a fractal network is reminiscent of an Inception module, except with additional joins due to recursive structure.

We begin with a more formal presentation of the ideas sketched in Figure 1. Convolutional neural networks serve as our running example and, in the subsequent section, our experimental platform. However, it is worth emphasizing that our framework is more general. In principle, convolutional layers in Figure 1 could be replaced by a different layer type, or even a custom-designed module or subnetwork, in order to generate other fractal architectures.

Let $C$ denote the index of the truncated fractal $f_C(\cdot)$. Our network's structure, connections and layer types, is defined by $f_C(\cdot)$. A network consisting of a single convolutional layer is the base case:

$$f_1(z) = \mathrm{conv}(z)$$

We define successive fractals recursively:

$$f_{C+1}(z) = [(f_C \circ f_C)(z)] \oplus [\mathrm{conv}(z)]$$

where $\circ$ denotes composition and $\oplus$ a join operation. When drawn in the style of Figure 1, $C$ corresponds to the number of columns, or width, of network $f_C(\cdot)$. Depth, defined to be the number of conv layers on the longest path between input and output, scales as $2^{C-1}$. Convolutional networks for classification typically intersperse pooling layers. We achieve the same by using $f_C(\cdot)$ as a building block and stacking it with subsequent pooling layers $B$ times, yielding total depth $B \cdot 2^{C-1}$.

The join operation merges two feature blobs into one. Here, a blob is the result of a conv layer: a tensor holding activations for a fixed number of channels over a spatial domain. The channel count corresponds to the size of the filter set in the preceding conv layer. As the fractal is expanded, we collapse neighboring joins into a single join layer which spans multiple columns, as shown on the right side of Figure 1. The join layer merges all of its input feature blobs into a single output blob.

Several choices seem reasonable for the action of a join layer, including concatenation and addition. We instantiate each join to compute the element-wise mean of its inputs. This is appropriate for convolutional networks in which channel count is set the same for all conv layers within a fractal block. Averaging might appear similar to ResNet's addition operation, but there are critical differences:

• ResNet makes clear distinction between pass-through and residual signals. In FractalNet, no signal is privileged. Every input to a join layer is the output of an immediately preceding conv layer. The network structure alone cannot identify any as being primary.
• Drop-path regularization, as described next in Section 3.1, forces each input to a join to be individually reliable. This reduces the reward for even implicitly learning to allocate part of one signal to act as a residual for another.
• Experiments show that we can extract high-performance subnetworks consisting of a single column (Section 4.2). Such a subnetwork is effectively devoid of joins, as only a single path is active throughout. They produce no signal to which a residual could be added.

Together, these properties ensure that join layers are not an alternative method of residual learning.
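To make the expansion rule concrete, here is a minimal sketch that builds $f_C$ as a plain Python callable with element-wise-mean joins. The shared `layer` argument stands in for a conv layer; in the real network each conv application has its own parameters.

```python
def fractal(C, layer):
    """f_1(z) = layer(z); f_{C+1}(z) = join(f_C(f_C(z)), layer(z)),
    with join computing the element-wise mean of its inputs."""
    if C == 1:
        return layer
    f_prev = fractal(C - 1, layer)
    def f(z):
        long_path = f_prev(f_prev(z))    # two stacked copies of f_{C-1}
        new_column = layer(z)            # the single conv of the new column
        return 0.5 * (long_path + new_column)
    return f

# Counting layer applications confirms the bookkeeping: the longest path
# through f_C has 2^(C-1) layers, and f_4 applies 15 layers in total.
count = [0]
def layer(z):
    count[0] += 1
    return z
fractal(4, layer)(1.0)
print(count[0])   # prints 15
```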
"}, {"section_index": "2", "section_name": "3.1 REGULARIZATION VIA DROP-PATH", "section_text": "Dropout (Hinton et al., 2012) and drop-connect (Wan et al., 2013) modify interactions between sequential network layers in order to discourage co-adaptation. Since fractal networks contain additional macro-scale structure, we propose to complement these techniques with an analogous coarse-scale regularization scheme.

Figure 2 illustrates drop-path. Just as dropout prevents co-adaptation of activations, drop-path prevents co-adaptation of parallel paths by randomly dropping operands of the join layers. This discourages the network from using one input path as an anchor and another as a corrective term (a configuration that, if not prevented, is prone to overfitting). We consider two sampling strategies:

• Local: a join drops each input with fixed probability, but we make sure at least one survives.
• Global: a single path is selected for the entire network. We restrict this path to be a single column, thereby promoting individual columns as independently strong predictors.

Figure 2: Drop-path. A fractal network block functions with some connections between layers disabled, provided some path from input to output is still available. Drop-path guarantees at least one such path, while sampling a subnetwork with many other paths disabled. During training, presenting a different active subnetwork to each mini-batch prevents co-adaptation of parallel paths. A global sampling strategy returns a single column as a subnetwork. Alternating it with local sampling encourages the development of individual columns as performant stand-alone subnetworks. The four panels depict successive training iterations: Iteration #1 (Local), Iteration #2 (Global), Iteration #3 (Local), Iteration #4 (Global).

As with dropout, signals may need appropriate rescaling. With element-wise means, this is trivial: each join computes the mean of only its active inputs.

In experiments, we train with dropout and a mixture model of 50% local and 50% global sampling for drop-path. We sample a new subnetwork each mini-batch. With sufficient memory, we can simultaneously evaluate one local sample and all global samples for each mini-batch by keeping separate networks and tying them together via weight sharing.

While fractal connectivity permits the use of paths of any length, global drop-path forces the use of many paths whose lengths differ by orders of magnitude (powers of 2). The subnetworks sampled by drop-path thus exhibit large structural diversity. This property stands in contrast to stochastic depth regularization of ResNet, which, by virtue of using a fixed drop probability for each layer in a chain, samples subnetworks with a concentrated depth distribution (Huang et al., 2016b).

Global drop-path serves not only as a regularizer, but also as a diagnostic tool. Monitoring performance of individual columns provides insight into both the network and training mechanisms, as Section 4.3 discusses in more detail. Individually strong columns of various depths also give users choices in the trade-off between speed (shallow) and accuracy (deep).
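A minimal sketch of both sampling modes follows, under the simplifying assumption that a join's i-th input comes from column i of those it spans; the function names are ours.

```python
import random

def sample_mode(num_columns):
    """50%/50% mixture of local and global sampling, drawn once per mini-batch."""
    if random.random() < 0.5:
        return ("local", None)
    return ("global", random.randrange(num_columns))

def join(inputs, mode, column=None, local_drop=0.15):
    """Drop-path at one join: mean over the active subset of inputs.
    inputs[i] is assumed to come from column i of those spanned by this join."""
    if mode == "global":
        return inputs[column]                  # only the chosen column survives
    keep = [random.random() >= local_drop for _ in inputs]
    if not any(keep):                          # guarantee at least one survivor
        keep[random.randrange(len(inputs))] = True
    active = [x for x, k in zip(inputs, keep) if k]
    return sum(active) / len(active)           # mean of active inputs only
```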
"}, {"section_index": "3", "section_name": "3.2 DATA AUGMENTATION", "section_text": "Data augmentation can reduce the need for regularization. ResNet demonstrates this, achieving 27.22% error rate on CIFAR-100 with augmentation compared to 44.76% without (Huang et al., 2016b). While augmentation benefits fractal networks, we show that drop-path provides highly effective regularization, allowing them to achieve competitive results even without data augmentation."}, {"section_index": "4", "section_name": "3.3 IMPLEMENTATION DETAILS", "section_text": "We implement FractalNet using Caffe (Jia et al., 2014). Purely for convenience, we flip the order of pool and join layers at the end of a block in Figure 1. We pool individual columns immediately before the joins spanning all columns, rather than pooling once immediately after them.

We train fractal networks using stochastic gradient descent with momentum. As now standard, we employ batch normalization together with each conv layer (convolution, batch norm, then ReLU).
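A reference sketch of the per-layer ordering (convolution, batch norm, then ReLU) at inference time, written directly in NumPy rather than Caffe; batch norm here uses stored running statistics, and the naive loop-based convolution is for exposition only.

```python
import numpy as np

def conv_bn_relu(x, w, gamma, beta, mean, var, eps=1e-5):
    """x: C_in x H x W; w: C_out x C_in x k x k ('same' zero-padding);
    gamma/beta/mean/var: per-output-channel batch norm parameters."""
    k = w.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    C_out, H, W = w.shape[0], x.shape[1], x.shape[2]
    y = np.zeros((C_out, H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[:, i:i + k, j:j + k]
            y[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    y = (y - mean[:, None, None]) / np.sqrt(var[:, None, None] + eps)
    y = gamma[:, None, None] * y + beta[:, None, None]
    return np.maximum(y, 0.0)                  # ReLU last

x = np.random.randn(16, 8, 8)
w = 0.1 * np.random.randn(32, 16, 3, 3)
out = conv_bn_relu(x, w, np.ones(32), np.zeros(32), np.zeros(32), np.ones(32))
```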
Table 1: CIFAR-100/CIFAR-10/SVHN. We compare test error (%) with other leading methods, trained with either no data augmentation, translation/mirroring (+), or more substantial augmentation (++). Our main point of comparison is ResNet. We closely match its benchmark results using data augmentation, and outperform it by large margins without data augmentation. Training with drop-path, we can extract from FractalNet single-column (plain) networks that are highly competitive.

Method                                   C100   C100+  C100++ C10    C10+   C10++  SVHN
Network in Network (Lin et al., 2013)    35.68  --     --     10.41  8.81   --     2.35
Generalized Pooling (Lee et al., 2016)   32.37  --     --     7.62   6.05   --     1.69
Recurrent CNN (Liang & Hu, 2015)         31.75  --     --     8.69   7.09   --     1.77
Multi-scale (Liao & Carneiro, 2015)      27.56  --     --     6.87   --     --     1.76
FitNet (Romero et al., 2015)             --     35.04  --     --     8.39   --     2.42
Deeply Supervised (Lee et al., 2014)     34.57  --     --     9.69   7.97   --     1.92
All-CNN (Springenberg et al., 2014)      33.71  --     --     9.08   7.25   4.41   --
Highway Net (Srivastava et al., 2015)    32.39  --     --     --     7.72   --     --
ELU (Clevert et al., 2016)               24.28  --     --     --     6.55   --     --
Scalable BO (Snoek et al., 2015)         --     --     27.04  --     --     6.37   1.77
Fractional Max-Pool (Graham, 2014)       --     --     26.32  --     --     3.47   --
FitResNet (Mishkin & Matas, 2016)        --     27.66  --     --     5.84   --     --
ResNet (He et al., 2016a)                --     --     --     --     6.61   --     --
ResNet by (Huang et al., 2016b)          44.76  27.22  --     13.63  6.41   --     2.01
Stochastic Depth (Huang et al., 2016b)   37.80  24.58  --     11.66  5.23   --     1.75
Identity Mapping (He et al., 2016b)      --     22.68  --     --     4.69   --     --
ResNet in ResNet (Targ et al., 2016)     --     22.90  --     --     5.01   --     --
Wide (Zagoruyko & Komodakis, 2016)       --     20.50  --     --     4.17   --     --
DenseNet-BC (Huang et al., 2016a)^1      19.64  17.60  --     5.19   3.62   --     1.74
FractalNet (20 layers, 38.6M params)     35.34  23.30  22.85  10.18  5.22   5.11   2.01
  + drop-path + dropout                  28.20  23.73  23.36  7.33   4.60   4.59   1.87
    deepest column alone                 29.05  24.32  23.60  7.27   4.68   4.63   1.89
FractalNet (40 layers, 22.9M params)^2   --     22.49  21.49  --     5.24   5.21   --"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "The CIFAR, SVHN, and ImageNet datasets serve as testbeds for comparison to prior work and analysis of FractalNet's internal behavior. We evaluate performance on the standard classification task associated with each dataset. For CIFAR and SVHN, which consist of 32 × 32 images, we set our fractal network to have 5 blocks ($B = 5$) with 2 × 2 non-overlapping max-pooling and subsampling applied after each. This reduces the input 32 × 32 spatial resolution to 1 × 1 over the course of the entire network. A softmax prediction layer attaches at the end of the network. Unless otherwise noted, we set the number of filter channels within blocks 1 through 5 as (64, 128, 256, 512, 512), mostly matching the convention of doubling the number of channels after halving spatial resolution.

For experiments using dropout, we fix drop rate per block at (0%, 10%, 20%, 30%, 40%), similar to Clevert et al. (2016). Local drop-path uses 15% drop rate across the entire network.

For ImageNet, we choose a fractal architecture to facilitate direct comparison with the 34-layer ResNet of He et al. (2016a). We use the same first and last layer as ResNet-34, but change the middle of the network to consist of 4 blocks ($B = 4$), each of 8 layers ($C = 4$ columns). We use a filter channel progression of (128, 256, 512, 1024) in blocks 1 through 4.

^1 Densely connected networks (DenseNets) are concurrent work, appearing subsequent to our original arXiv paper on FractalNet. A variant of residual networks, they swap addition for concatenation in the residual functional form. We report performance of their 250-layer DenseNet-BC network with growth rate k = 24.
^2 This deeper (4 column) FractalNet has fewer parameters. We vary column width: (128, 64, 32, 16) channels across columns initially, doubling each block except the last. A linear projection temporarily widens thinner columns before joins. As in Iandola et al. (2016), we switch to a mix of 1 × 1 and 3 × 3 convolutional filters.

Table 2: ImageNet (validation set, 10-crop)

Table 3: Ultra-deep fractal networks (CIFAR-100++). Increasing depth greatly improves accuracy until eventual diminishing returns. Contrast with plain networks, which are not trainable if made too deep (Table 4).

Cols.  Depth  Params.  Error (%)
1      5      0.3M     37.32
2      10     0.8M     30.71
3      20     2.1M     27.69
4      40     4.8M     27.38
5      80     10.2M    26.46
6      160    21.1M    27.38

We run for 400 epochs on CIFAR, 20 epochs on SVHN, and 70 epochs on ImageNet. Our learning rate starts at 0.02 (for ImageNet, 0.001) and we train using stochastic gradient descent with batch size 100 (for ImageNet, 32) and momentum 0.9. For CIFAR/SVHN, we drop the learning rate by a factor of 10 whenever the number of remaining epochs halves. For ImageNet, we drop by a factor of 10 at epochs 50 and 65. We use Xavier initialization (Glorot & Bengio, 2010).

A widely employed (Lin et al., 2013; Clevert et al., 2016; Srivastava et al., 2015; He et al., 2016a;b; Huang et al., 2016b; Targ et al., 2016) scheme for data augmentation on CIFAR consists of only horizontal mirroring and translation (uniform offsets in [−4, 4]), with images zero-padded where needed after mean subtraction. We denote results achieved using no more than this degree of augmentation by appending a "+" to the dataset name (e.g. CIFAR-100+). A "++" marks results reliant on more data augmentation; here exact schemes may vary. Our entry in this category is modest and simply changes the zero-padding to reflect-padding.
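The "+" augmentation scheme, and our "++" variant, amount to the following sketch; the `mode` argument switches between zero-padding and reflect-padding, and the function name is ours.

```python
import numpy as np

def augment(img, pad=4, mode="constant"):
    """Horizontal mirroring plus a uniform translation in [-pad, pad], realized
    by padding then random cropping. mode='constant' gives zero-padding ('+');
    mode='reflect' gives the reflect-padding used for our '++' entry."""
    if np.random.rand() < 0.5:
        img = img[:, ::-1, :]                            # mirror (H x W x C)
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode=mode)
    dy, dx = np.random.randint(0, 2 * pad + 1, size=2)
    h, w = img.shape[:2]
    return padded[dy:dy + h, dx:dx + w, :]
```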
larization, FractalNet's performance on CIFAR is superior to both ResNet and ResNet with stochastic. depth, suggesting that FractalNet may be less prone to overfitting. Most methods perform similarly. on SVHN. Increasing depth to 40, while borrowing some parameter reduction tricks (Iandola et al.,. 2016). reveals FractalNet's performance to be consistent across a range of configuration choices\nNote that the performance of the deepest column of the fractal network is close to that of the full network (statistically equivalent on CIFAR-1O). This suggests that the fractal structure may be more important as a learning framework than as a final model architecture\nTable 2 shows that FractalNet scales to ImageNet, matching ResNet (He et al., 2016a) at equal depth Note that, concurrent with our work, refinements to the residual network paradigm further improve the state-of-the-art on ImageNet. Wide residual networks (Zagoruyko & Komodakis, 2016) of 34-layers. reduce single-crop Top-1 and Top-5 validation error by approximately 2% and 1%, respectively, ove.\nModel Depth Train Loss. Error (%) Plain 5 0.786 36.62 Plain 10 0.159 32.47 Plain 20 0.037 31.31 Plain 40 0.580 38.84 Fractal Col #1 5 0.677 37.23 Fractal Col #2 10 0.141 32.85 Fractal Col #3 20 0.029 31.31 Fractal Col #4 40 0.016 31.75 Fractal Full 40 0.015 27.40\nTable 4: Fractal structure as a training appara- tus (CIFAR-100++). Plain networks perform well if moderately deep, but exhibit worse convergence dur ing training if instantiated with great depth. How- ever, as a column trained within, and then extracted from, a fractal network with mixed drop-path, we recover a plain network that overcomes such depth limitation (possibly due to a student-teacher effect)\nWe run for 400 epochs on CIFAR, 20 epochs on SVHN, and 70 epochs on ImageNet. Our learning. rate starts at 0.02 (for ImageNet, 0.001) and we train using stochastic gradient descent with batch size 100 (for ImageNet, 32) and momentum 0.9. For CIFAR/SVHN, we drop the learning rate by a. factor of 10 whenever the number of remaining epochs halves. For ImageNet, we drop by a factor of. 10 at epochs 50 and 65. We use Xavier initialization (Glorot & Bengio, 2010)..\nExperiments without data augmentation highlight the power of drop-path regularization. On CIFAR. 100, drop-path reduces FractalNet's error rate from 35.34% to 28.20%. Unregularized ResNet is far. behind (44.76%) and ResNet with stochastic depth (37.80%) does not catch up to our unregularized. starting point of 35.34%. CIFAR-10 mirrors this story. With data augmentation, drop-path provides a. boost (CIFAR-1O), or does not significantly influence FractalNet's performance (CIFAR-100).\nPlain Networks FractalNet 10 101 5 layers Col #1: 5 layers 10 layers Col #2: 10 layers 20 layers Col #3: 20 layers 40 layers Col #4: 40 layers FractalNet 100 100 10-1 10-1 ..... ..... -.. ... 0 50 100 150 200 250 300 350 400 0 50 100 150 200 250 300 350 400 Epochs Epochs\nFigure 3: Implicit deep supervision. Left: Evolution of loss for plain networks of depth 5, 10, 2. and 40 trained on CIFAR-100. Training becomes increasingly difficult for deeper networks. At 4 layers, we are unable to train the network satisfactorily. Right: We train a 4 column fractal networl. with mixed drop-path, monitoring its loss as well as the losses of its four subnetworks corresponding. to individual columns of the same depth as the plain networks. As the 20-layer subnetwork starts t. 
stabilize, drop-path puts pressure on the 40-layer column to adapt, with the rest of the network as it.. teacher. This explains the elbow-shaped learning curve for Col #4 that occurs around 25 epochs..\nResNet-34 by doubling feature channels in each layer. DenseNets (Huang et al., 2016a) substantiall improve performance by building residual blocks that concatenate rather than add feature channels\nTable 3 demonstrates that FractalNet resists performance degradation as we increase C to obtain extremely deep networks (160 layers for C = 6). Scores in this table are not comparable to those in Table 1. For time and memory efficiency, we reduced block-wise feature channels to (16, 32, 64, 128, 128) and the batch size to 50 for the supporting experiments in Tables 3 and 4."}, {"section_index": "7", "section_name": "4.3 INTROSPECTION", "section_text": "We hypothesize that the fractal structure triggers effects akin to deep supervision and lateral student. teacher information flow. Column #4 joins with column #3 every other layer, and in every fourth layer this join involves no other columns. Once the fractal network partially relies on the signal going through column #3, drop-path puts pressure on column #4 to produce a replacement signal. when column #3 is dropped. This task has constrained scope. A particular drop only requires two. consecutive layers in column #4 to substitute for one in column #3 (a mini student-teacher problem).\nThis explanation of FractalNet dynamics parallels what, in concurrent work, Greff et al. (2017). claim for ResNet. Specifically, Greff et al. (2017) suggest residual networks learn unrolled iterative estimation, with each layer performing a gradual refinement on its input representation. The deepest. FractalNet column could behave in the same manner, with the remainder of the network acting as a. scaffold for building smaller refinement steps by doubling layers from one column to the next.\nTable 4 provides a baseline showing that training of plain deep networks begins to degrade by the time. their depth reaches 40 layers. In our experience, a plain 160-layer completely fails to converge. This. table also highlights the ability to use FractalNet and drop-path as an engine for extracting trained networks (columns) with the same topology as plain networks, but much higher test performance.\nWith Figure 3, we examine the evolution of a 40-layer FractalNet during training. Tracking columns individually (recording their losses when run as stand-alone networks), we observe that the 40-layer column initially improves slowly, but picks up once the loss of the rest of the network begins to stabilize. Contrast with a plain 40-layer network trained alone (dashed blue line), which never makes fast progress. The column has the same initial plateau, but subsequently improves after 25 epochs producing a loss curve uncharacteristic of plain networks.\nThese interpretations appear not to mesh with the conclusions of Veit et al. (2016), who claim tha. ensemble-like behavior underlies the success of ResNet. This is certainly untrue of some very deej. networks, as FractalNet provides a counterexample: we can extract a single column (plain networl topology) and it alone (no ensembling) performs nearly as well as the entire network. Moreover, th gradual refinement view may offer an alternative explanation for the experiments of Veit et al. (2016 If each layer makes only a small modification, removing one may look, to the subsequent portior. 
of the network, like injecting a small amount of input noise. Perhaps noise tolerance explains th gradual performance degradation that Veit et al. (2016) observe when removing ResNet layers.."}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "With drop-path, regularization of extremely deep fractal networks is intuitive and effective. Drop-path doubles as a method of enforcing speed (latency) vs. accuracy tradeoffs. For applications where fast responses have utility, we can obtain fractal networks whose partial evaluation yields good answers"}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used fo. this research."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? NIPs, 2014\nDjork-Arne Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). ICLR, 2016.\nJia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchica image database. CVPR, 2009\nXavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural network AISTATS, 2010.\nBenjamin Graham. Fractional max-pooling. arXiv:1412.6071. 2014\nKlaus Greff, Rupesh Kumar Srivastava, and Jurgen Schmidhuber. Highway and residual networks learn unrolle iterative estimation. ICLR. 2017.\nBharath Hariharan, Pablo Arbelaez, Ross Girshick, and Jitendra Malik. Hypercolumns for object segmentatiol and fine-grained localization. CVPR, 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-leve. performance on ImageNet classification. ICCV, 2015.\nGeoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improvin. neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012\nOur experiments with fractal networks provide strong evidence that path length is fundamental for training ultra-deep neural networks; residuals are incidental. Key is the shared characteristic of FractalNet and ResNet: large nominal network depth, but effectively shorter paths for gradient propagation during training. Fractal architectures are arguably the simplest means of satisfying this requirement, and match residual networks in experimental performance. Fractal networks are resistant to being too deep; extra depth may slow training, but does not impair accuracy.\nOur analysis connects the internal behavior of fractal networks with phenomena engineered into other . networks. Their substructure resembles hand-crafted modules used as components in prior work Their training evolution may emulate deep supervision and student-teacher learning.\nAlex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009\nChen-Yu Lee, Patrick W Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed. gated. and tree. A1STATS. 2016.\nMing Liang and Xiaolin Hu. Recurrent convolutional neural network for object recognition. CVPR, 201\nZhibin Liao and Gustavo Carneiro. Competitive multi-scale convolution. arXiv:1511.05635, 2015\nMin Lin, Qiang Chen, and Shuicheng Yan. Network in network. ICLR, 2013.\nDmytro Mishkin and Jiri Matas. All you need is a good init. ICLR, 2016.\nVinod Nair and Geoffrey E Hinton. 
Rectified linear units improve restricted boltzmann machines. ICML, 2010\nBehnam Neyshabur, Ruslan Salakhutdinov, and Nathan Srebro. Path-SGD: Path-normalized optimization in deep neural networks. NIPS, 2015.\nAdriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio Fitnets: Hints for thin deep nets. ICLR, 2015.\nJasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md Patwary Mostofa Ali, Ryan P Adams, et al. Scalable bayesian optimization using deep neural networks. ICML, 2015\nRupesh Kumar Srivastava, Klaus Greff, and Jurgen Schmidhuber. Highway networks. ICML, 2015\nSasha Targ, Diogo Almeida, and Kevin Lyman. Resnet in resnet: Generalizing residual architectures arXiv:1603.08029, 2016.\nGao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth ECCV, 2016b.\nYangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadar- rama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093. 2014.\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. NIPS, 2012\nMichael Maire, Stella X. Yu, and Pietro Perona. Reconstructive sparse code transfer for contour detection and semantic labeling. ACCV, 2014..\nJost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity The all convolutional net. ICLR (workshop track), 2014.\nChristian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. CVPR, 2015..\nGregor Urban, Krzysztof J. Geras, Samira Ebrahimi Kahou, Ozlem Aslan, Shengjie Wang, Abdelrahman Mohamed, Matthai Philipose, Matt Richardson, and Rich Caruana. Do deep convolutional nets really need to be deep and convolutional? ICLR. 2017.\nAndreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles of relativel shallow networks. NIPS, 2016.\nSergey Zagoruyko and Nikos Komodakis. Wide residual networks. BMvC, 2016"}] |
BJ9fZNqle | [{"section_index": "0", "section_name": "ABSTRACT", "section_text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, uni- modal priors - such as the multivariate Gaussian distribution - yet many real- world data distributions are highly complex and multi-modal. Examples of com- plex and multi-modal distributions range from topics in newswire text to con- versational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex as- pects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-moda1 variational encoder-decoder framework and investigate the effective ness of the proposed prior in several natural language processing modeling tasks including document modeling and dialogue modeling."}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "With the development of the variational autoencoding framework (Kingma & Welling2013. Rezende et al.2014), a tremendous amount of progress has been made in learning large-scale. directed latent variable models. This approach has lead to improved performance in applications. ranging from computer vision (Gregor et al.. 2015 Larsen et al.2015) to natural language pro cessing (Mnih & Gregor 2014} Miao et al. 2015 Bowman et al.]2015} Serban et al.]2016bJ Furthermore, these models naturally incorporate a Bayesian modeling perspective, by enabling the. nteoration of problem-denendent knowledge in the form of a prior on the generating distribution\nHowever, the majority of models proposed assume an extremely simple prior in the form of a mul tivariate Gaussian distribution in order to maintain mathematical and computational tractability. A1 though this assumption on the prior has lead to favorable results on several tasks, it is clearly a restrictive and often unrealistic assumption. First, it imposes a strong uni-modal structure on the la tent variable space; latent samples from the generating model (prior distribution) all cluster around a single mean. Second, it encourages local smoothness on the latent variables; the similarity be tween two latent variables decreases exponentially as their distance increase. Thus, for complex. multi-modal distributions - such as the distribution over topics in a text corpus, or natural language. responses in a dialogue system - the uni-modal Gaussian prior inhibits the model's ability to ex tract and represent important structure in the data. To learn more powerful and expressive models -- in particular, models with multi-modal latent variable structures for natural language processing. applications - we seek a suitable and flexible prior than can be automatically adapted to mode. multiple modes of a target distribution.\nFirst two authors contributed equally\nIn this paper, we propose the multi-modal variational encoder-decoder framework, introducing an efficient, flexible prior distribution that is suitable for distributions such as those found in natural language text. 
We demonstrate the effectiveness of our multi-modal variational architectures in two representative tasks: document modeling and dialogue modeling. We find that our prior is able tc capture elements of a target distribution that simpler priors - such as the uni-modal Gaussian cannot model, thus allowing neural latent variable models to extract richer structure from data. In particular, we achieve state-of-the-art results on several document modeling tasks.\nWith respect to dialogue modeling, latent variable models were investigated byBangalore et al. (2008),Crook et al.(2009) as well as others. More recently,Zhai & Williams(2014) have proposed three models combining hidden Markov models and topic models. The success of these discrete latent variable models also motivates our investigation into dialogue models with multi-modal la- tent variables. Most related to our work is the Variational Hierarchical Recurrent Encoder-Decoder (VHRED) model by Serban et al.(2016b), which is a neural architecture with latent multivariate Gaussian variables. This model will be described later.\nThere has been some work exploring alternative distributions for the latent variables in the varia tional autoencoder framework, including multi-modal distributions. Rezende & Mohamed(2015 propose an approach called normalizing flows which computes a more complex, potentially multi modal distribution, by projecting standard Gaussian variables through a sequence of non-lineai transformations. This approach is similar to the inverse auto-regressive flow proposed by Kingma et al.(2016). Unfortunately, both normalizing flows and auto-regressive flow are only applicable to the approximate posterior distribution; typically these approaches require fixing the prior distri\nThe idea of using an artificial neural network to approximate an inference model dates back to the. 90s (Hinton & Zemel]1994]Hinton et al.]1995] Dayan & Hinton]1996). However, initial attempts at such an approach were hindered by the lack of low-bias, low-variance estimators of parame- ter gradients. Traditionally, researchers resorted to Markov chain Monte Carlo methods (MCMC) (Neall [1992) which do not scale well and mix slowly, or to variational approaches which require a. tractable, factored distribution to approximate the true posterior distribution, usually under-fitting it (Jordan et al.1999). Others have since proposed using feed-forward inference models to effi-. ciently initialize the mean-field inference algorithm for incrementally training Boltzmann architec-. tures (Salakhutdinov & Larochelle]2010, [Ororbia II et al.]2015b). However, these approaches are limited by the mean-field inference's inability to model structured posteriors. Recently, Mnih &. Gregor(2014) proposed the neural variational inference and learning (NVIL) approach to match the. true posterior directly without resorting to approximate inference. NVIL allows for the joint training. of an inference network and directed generative model, maximizing a variational lower-bound on. the data log-likelihood and facilitating exact sampling of the variational posterior. Simultaneously with this work, the variational autoencoder framework was proposed by[Kingma & Welling(2013] and Rezende et al.(2014). 
This framework is the motivation of this paper, and will be discussed in detail in the next section.\nWith respect to document modeling, it has recently been demonstrated that neural architectures can outperform well-established, standard topic models such as Latent Dirichlet Allocation (LDA) (Blei et al.|2003). For example, it has been demonstrated that models based on the Boltzmann ma- chine, which learn semantic binary vectors (binary latent variables), perform very well (Hofmann 1999). Work involving discrete latent variables include the constrained Poisson model (Salakhut- dinov & Hinton,2009), the Replicated Softmax model (Hinton & Salakhutdinov2009) and the Over-Replicated Softmax model (Srivastava et al.]2013), as well as similar, auto-regressive neural architectures and deep directed graphical models (Larochelle & Lauly]2012]Uria et al.|[2014]Lauly et al.2016] Bornschein & Bengio] 2014). In particular, Mnih & Gregor(2014) showed that using NVIL yields better generative models of documents than these previous approaches. The success of these discrete latent variable models - which are able to partition probability mass into separate regions - serve as the main motivation for investigating models with continuous multi-modal la- tent variables for document modeling. More recently,Miao et al.(2015) have proposed continuous latent variable representations for document modeling, which has achieved state-of-the-art results. This model will be described later.\nbution to a uni-modal multivariate Gaussian. Furthermore, to the best of our knowledge, neither o. these approaches have been investigated in the context of larger scale text processing tasks, such as the document modeling and dialogue modeling tasks we evaluate on. A complementary approach is. to combine variational inference with MCMC sampling (Salimans et al.2015] Burda et al. 2015 however this is computationally expensive and therefore difficult to scale up to many real-world. tasks. Enriching the latent variable distributions has also been investigated by[Maalge et al.(2016).\nMixture of Gaussians Perhaps the most direct and naive approach to learning multi-modal latent. variables is to parametrize the latent variable prior and approximate posterior distributions as a. mixture of Gaussians. However, the KL divergence between two mixtures of Gaussian distributions cannot be computed in closed form (Durrieu et al.|2012). To train such a model, one would have. to either resort to MCMC sampling, which may slow down and hurt the training process due to the. high variance it incurs, or resort to approximations of the KL divergence, which may also hurt the. training process1\nDeep Directed Models An alternative to a mixture of Gaussians parametrization is to construct. a deep directed graphical model composed of multiple layers of uni-modal latent variables (e.g. multivariate Gaussians) (Rezende et al.]2014). Such models have the potential to capture highly. complex, multi-modal latent variable representations through the marginal distribution of the top-. layer latent variables. However, this approaches has two major drawbacks. First, the variance of the. gradient estimator grows with the number of layers. This makes it difficult to learn highly multi-. modal latent representations. Second, it is not clear how many modes such models can represent or how their inductive biases will affect their performance on tasks containing multi-modal latent. structure. 
The piecewise constant latent variables we propose do not suffer from either of these two. drawbacks; the piecewise constant variables incur low variance in the gradient estimator, and can,. in principle, represent a number of modes exponential in the number of latent variables..\nDiscrete Latent Variables A third approach for learning multi-modal latent representations is to in. stead use discrete latent variables as discussed above. For example, the learning procedure proposed. by Mnih & Gregor(2014) for discrete latent variables can easily be combined with the variational. autoencoder framework to learn models with both discrete and continuous latent variables. How. ever, the major drawback of discrete latent variables is the high variance in the gradient estimator Without further approximations, it might be difficult to scale up models with discrete latent variables. for real-world tasks."}, {"section_index": "2", "section_name": "3 THE MULTI-MODAL VARIATIONAL ENCODER-DECODER FRAMEWORK", "section_text": "We start by describing the general neural variational learning framework. Then we present our pro. posed prior model aimed at enhancing the model's ability to learn multiple modes of data distribu tions. We focus on modeling discrete output variables in the context of natural language processing. applications. However, the framework can easily be adapted to handle continuous output variables such as images, video and audio."}, {"section_index": "3", "section_name": "3.1 NEURAL VARIATIONAL LEARNING", "section_text": "Let w1,..., wy be a sequence of N words conditioned on a continuous latent variable z. In the general framework, the distribution over the variables follows the directed graphical model:\nwhere 0 are the model parameters. The model first generates the higher-level, continuous latent variable z, and then, conditioned on this, generates the word sequence. The document modeling\n' Our lab has previously investigated incorporating mixture of Gaussian models into the autoencoder frame. work, but without any success. This work has not been published..\nN Pe(wi,...,wn,z) Pe(wn|w<n,z)Pe(z)dz n=1\ntask further simplifies the model by assuming the words are independent of each other.\nThe majority of work on VAEs that uses the re-parametrization trick propose to parametrize z - both the prior and approximate posterior (encoder) -- as a multivariate Gaussian variable. However the multivariate Gaussian is a uni-modal distribution and can therefore only represent one mode in latent space. This means the mapping from latent variable to outputs - i.e. the conditiona distribution Pe(wn|z) - has to be highly non-linear in order to capture additional modes. However in general, it is difficult to learn such non-linear mappings with existing stochastic optimizatior methods, such as mini-batch stochastic gradient descent and its variants. Learning such a non-linea mapping is particularly difficult using the variational bound in eq. (3), because it incurs additiona variance from sampling the latent variable z. Consequently, such a model is very likely to converge on a solution which does not model multi-modality which then leads to a poor approximation of the output distribution."}, {"section_index": "4", "section_name": "3.2 THE PIECEWISE-CONSTANT PRIOR FOR LATENT VARIABLES", "section_text": "In this work, we overcome the uni-modal restriction by parametrizing z using a piecewise constant. probability density function (PDF). 
This parametrization will allow z to represent complex aspects of the data distribution in latent variable space, such as multiple modes and highly non-smooth regions of probability mass. From a manifold learning perspective, this extension translates into expanding the set of manifolds representable by the model parameters to include more non-linear manifolds in particular, manifolds where there exists separate clusters of probability mass..\nLet n E N be the number of piecewise constant components. We assume z is drawn from the PDF:\n1 n P(z ai7 K i=1 n n\nn K = ai Ki where Ko := 0, K, :=- fori=1,...,n n i=1\nn 1 ai K n i=1 n n n\nN 11 Pe(wi,..., Wn, z Pe(wn|z)Pe(z)dz n=1\nz = fe(e)\nwhere e is drawn from a random distribution, e.g. a standard Gaussian distribution (with zero mean and unit standard deviation) or a uniform distribution in the interval [0, 1], and f is some transfor mation of this variable. also parametrized by 0\nwhere 1() is the indicator function (which is one whenever x is true and otherwise zero), a; > 0 for. i = 1, ..., n are the distribution parameters (which will be learned during training), and K is the normalization constant:\nNext. we derive its inverse:\nIn addition to sampling, we need to compute the Kullback-Leibler (KL) divergence between the prior and approximate posterior distributions of the piecewise constant variables. We assume both the prior and the posterior are piecewise constant distributions. We use prior to denote prior parameters and post to denote posterior parameters (encoder model parameters). The KL divergence betweer the prior and posterior can be computed using a sum of integrals, where each integral inside the sum corresponds to one constant segment:\nQw(z|W1,...,WN) LQw(z|w1,...,Wn)]Pe(z)] . . , wn) lo Pe(z) post Kpost a log dz Kpost pr1o K prior\nWN [w1,...,wn)|[Pe(z) WN Pe(z post pos Kpost a log dz Kpost prioi Kprior 0 .- n ,post post a DOS log Kpost n prioi Kprior i=1 n 1 1 post a n Kpost i=1 + log(Kprior) log(Kpost )\nIn order to train the model, we take partial derivatives of the variational bound in eq. (3) w.r.t each parameter in 0 and . These expressions involve derivatives of the indicator functions, which. have derivatives zero everywhere except for the changing points where the derivative is undefined.. However, the probability of sampling e such that an indicator function is exactly at its changing. point is effectively zero. Therefore, we fix their derivatives to zero| A similar approach is used for. training neural networks with rectified linear units. Figure|1|illustrates how the piecewise constant latent variables can work with Gaussian latent variables in order to model multi-modality.."}, {"section_index": "5", "section_name": "LATENT VARIABLE PARAMETRIZATIONS", "section_text": "The latent variable parametrizations are crucial to modeling the data effectively. In this section, we will develop the parametrizations for both the Gaussian variable and our proposed piecewise laten Variable.\nFor all parametrizations, let c be the conditioning information for the prior. In document modeling. there is no conditioning information available to the prior, so c = 0. In dialogue modeling c i. the vector representation of the dialogue context, namely all previous utterances until the curren time step. Let x be the current output sequence (observation), which the model must generate (e.g. W1 , : . . 
, w y for document modeling)."}, {"section_index": "6", "section_name": "4.1 GAUSSIAN PARAMETRIZATION", "section_text": "Let prior and o2,prior be the prior mean and variance, and let post and o2,post be the posterior mean and variance. For Gaussian latent variables, the prior distribution mean and variances are encoded using linear transformations of a hidden state. In particular, the prior distribution covariance is\n2we thank Christian A. Naesseth for pointing out this assumption\nn i-1 K 1 Kj E ai K n i=1 Kj<e j=0 K\nz =-e) where e ~ Uniform(0, 1)\nQw(z|w1,..., Wn) dz (10) WA Pe(z) post post Kpost a log dz (11) Kpost r1 prio (12) n 1 .post log (13) n K post pro prio i=1 n 1 1 post (14) a 10 n Kpost i=1 (Kprior) DOS\nFigure 1: The horizontal axis corresponds to z1, which is a univariate Gaussian variable. The vertical. axis corresponds to z2, which is a piecewise constant variable. The PDF for each variable is shown along each axis. and their joint distribution is illustrated in grey color.\nencoded as a diagonal covariance matrix using a softplus function\n2,prior = diag(log(1 + exp(HpriorEnc(c) + bprior)))\nwhere Enc(c) is an embedding/encoding of the context c (e.g. given by a bag-of-words encoder or an LSTM encoder applied to c), which is shared across all latent variable dimensions. The parameters. are to be learned.\nFor the posterior distribution, our preliminary experiments have shown that it is much better to. parametrize the posterior distribution by interpolating between the prior distribution mean and vari-. ance and a new estimate of the mean and variance. This interpolation is controlled by a gating. mechanism, which makes it easy for the model to learn how to turn on/off latent dimensions:\nupost = (1 Q (HpostEnc(c, x) + bpost) 2,post = 2,prior + diag(log(1 + exp(HpostEnc(c,x) + (1 - Q\nSimilar to the Gaussian variances, we propose to parametrize the piecewise constant prior parameters. using an exponential function applied to a linear transformation of the context embedding/encoding.\nprior ai =1 ...n a 1.\nare the parameters to be learned.\nWe may also constrain the piecewise constant posterior parameters to be an interpolation betwee the prior parameters and a new estimated parameter:\npost + Qa,i exp(HpostEnc(c, x) + bpost\nZ2 Z1\nwhere Enc(c,x) is an encoding/embedding of both c and x, and where the parameters are Hpost, bpost, Hpost, bpost ost, Q, Qg. The interpolation mechanism is controlled by Q and Qg, which are initialized to zero (i.e. initialized such that the posterior is equal to the prior)|\n3We experimented with more sophisticated mechanisms for controlling the gating variables, including defin. ing Q and ao to be a linear function of the encoder. However, we found that simpler was often better and thus do not report these results using more advanced mechanisms.\nWe now present two probabilistic models, the NVDM and the VHRED, which are extended t incorporate the latent variable parametrization and used for the document modeling and the dialogu modeling experiments described below..\nThe NVDM framework (Mnih & Gregor 2014] Miao et al.]2015) collapses the recurrent neural encoder into a simpler bag-of-words model (since no symbol order is taken into account), which may be defined as a multi-layer perceptron (MLP) for Enc(c = 0,x) = Enc(x). Let V be the vocabulary. Let W represent a document matrix, where row w; is the 1-of-|V binary encoding of the i'th word in the document. 
Enc(W) is trained to compress a document vector into a continuous distributed representation upon which the posterior model is built.\nThe NVDM parametrization requires only learning the parameters bprior, wpost, bpost for the piece wise variables, and learning the parameters bprior, prior, wpost, post, wpost, post for the Gaussian vari ables. We initialize the bias parameters to zero, in order for the NVDM to start with a centered Gaus sian prior. This prior will be adapted by the parametric encoder as learning progresses, while als learning to turn on/off latent dimensions controlled through the gating mechanism. It is important t note that our particular instantiation of the NVDM is different from that of|Mnih & Gregor(2014 and Miao et al.(2015); we jointly learn the prior mean and variance whereas in previous work it ha been assumed to be a standard Gaussian. Furthermore, our models learn to interpolate between th generated prior and posterior models to calculate a new posterior.\nBased on preliminary experiments, we choose the encoder to be a 2-hidden layer perceptron, definec. by parameters {Eo, bo, E1, b1}. The decoder is defined by parameters {R, c}. For example, in the. case of the hybrid VAE we use eq. (15)-(20) to generate the distribution parameters. In this case. to draw a sample from the Gaussian prior, we draw a standard Gaussian variable and then multiply it by the standard deviation and add the mean of the Gaussian prior. To draw a sample from the piecewise prior, we use eq. (8). As such, the complete architecture is:.\nEnc(W) = f1(E'(W)+b1) ZGaussian = Po 2,post EO -1,post ZPiecewise = z = (ZGaussian, ZPiecewise) Dec(w,z) =g(-w'Rz),\nwhere is the Hadamard product, (o, o) is an operator that combines the Gaussian and the Piecewise variables and Dec(w, z) is the decoder model.4 As a result of using the re-parametrization trick and choice of prior, we calculate the latent variable z through the two samples, eo and e1. f(o) is a non-linear activation function. We choose it to be the softsign function, or f(v) = v/(1 + v[). The decoder model Dec(z) outputs a probability distribution over words conditioned on z. In this case, we define g(o) as the softmax function (omitting the bias term c for clarity) computed as:\n4Operations include vector concatenation, summation, or averaging\nTo take advantage of the properties of both priors, the Gaussian and piecewise constant variables may be combined, as was suggested in Section 3.2 In this work, we primarily experimented with their concatenation to create a hybrid model.\nexp(-wRz)\nThe decoder's output is used to calculate the first term in the variational lower-bound: log Pe(W|z. The prior and posterior distributions are used to compute the KL term in the variational lower-bound The lower-bound defined becomes:.\nN L = EQy(z|W) 1og Pe(wi|z) KL[Qw(z|W)]Pe(z)] i=1\nwhere the KL term is the sum of the Gaussian and piecewise KL-divergence measures:\nKL[Q(z|W)||P(z)] = KLGaussian[Q(z|W)||P(z)]+ KLPiecewise[Q(z|W)||P(z)]\nThe KL-terms may be interpreted as regularizers of the parameter updates for the encoder mode (Kingma & Welling2013). These terms encourage the posterior distributions to be similar to thei corresponding prior distributions, by limiting the amount of information the encoder model trans mits regarding the output. 
For example, it encourages the uni-modal Gaussian posterior to move it mean close to the mean of the Gaussian prior, which makes it difficult for the Gaussian posterio to represent different modes conditioned on the observation. Similarly, this encourages the piece wise constant posterior to be similar to the piecewise constant prior. However, since the piecewis constant posterior is multi-modal, it may be able to shift some of its probability mass towards th prior distribution while keeping other probability mass on one or several modes dependent upo the output observation (e.g. if the prior distribution is a uniform distribution and the true posteric concentrates all its probability mass in several small regions, then the approximate posterior coul interpolate between the prior and the true posterior)."}, {"section_index": "7", "section_name": "5.2 VARIATIONAL HIERARCHICAL RECURRENT ENCODER-DECODER (VHRED)", "section_text": "The VHRED model is an extension of the hierarchical recurrent encoder-decoder model (HRED for dialogue (Serban et al.]2016b a). The model decomposes dialogues using a two-level hierarchy sequences of utterances (e.g. sentences), and sub-sequences of tokens (words). Let wn be the n'th utterance in a dialogue with N utterances. Let wn,m be the m'th word in the n'th utterance from vocabulary V, and let M, be the number of words in the n'th utterance. In addition to this, VHRED has a latent multivariate continuous variable zn for each utterance n = 1, ..., N. The probability distribution of the generative model factorizes as:\nN Pe(wi,...,wn) Po(wn|w<n,Zn)Pe(zn|w<n)z n=1 N Mn II II Pa(wn,m|Wn,<m,W<n,zn)Pa(zn|w<n); n=1 m=1\nwhere 0 are the model parameters. VHRED uses three RNN modules: an encoder RNN, a contex RNN and a decoder RNN. First. each utterance is encoded into a vector by the encoder RNN:\nenc 0 h enc 'enc hn,m-1, Wn,m) Vm = 1,..., Mn b enc n.0\n= 0. h con - econ/ con enc\nwhere fprior is a PDF parametrized by both 0 and hcon. Next, a sample is drawn from this distribu tion: zn ~ Pe(zn|w<n). The sample and context state are given as input to the decoder RNN:\nde 1e( de con Wn.m Vm=1.... Mn\nwhere fenc is either a GRU or a bidirectional GRU function. The last hidden state of the encoder RNN is given as input to the context RNN. Then, the context RNN updates its internal hidden state to reflect all the information up until that utterance:\nwhere fcon is a GRU function taking as input two vectors. This state is used to compute the prior distribution over the latent variable zn:\nPe(znw<n) ior (hcon)\nwhere fdec is the LSTM gating function taking as input four vectors. The output distribution is an affine transformation and a softmax function:. n.m\nOwn,m+1)'fmlp(hdecm. n.m Pg(Wn,m+1|Wn,<m,W<n,Zn nlp (hdec. .TI\nN logPe(w1,...,wn) -KL[Qw(znw1,...,wn)||Pe(znw<n) n=1\n(hcon1, hen,Mm Qy(zn|w<n) = fPos\nThe original VHRED model as described by Serban et al.(2016b) used only Gaussian latent vari-. ables. We will refer to this model as Gaussian-VHRED (G-VHRED). The VHRED model with both Gaussian and piecewise constant latent variables will be referred to as Hybrid-VHRED (H VHRED). In this case, we combine the Gaussian and piecewise latent variables by concatenating. them into one vector\nIn order to validate the ability of our piecewise latent variables to capture complex aspects of data distributions, we conduct experiments with both the NVDM and VHRED models..\nAll models are trained using back-propagation to obtain parameter gradients with respect to the. 
variational lower-bound on the log-likelihood or the exact log-likelihood. We used a standard first order gradient-descent optimizer, Adam (Kingma & Ba]2015), for both models, where only hyper. parameter choices varied depending on the task. The specifics of the design of the encoder and. decoder differed between the two tasks (as described in Sections |5.1|and |5.2). For all models that. used piecewise latent variables, we chose to fix da, = 1, meaning the piecewise prior and poste. rior models are kept separate (instead of having the posterior be an interpolation between anothei. distribution and the prior), since we found this to perform bette[6."}, {"section_index": "8", "section_name": "6.1 DOCUMENT MODELING", "section_text": "For our experiments in document modeling, we make use of the 20 News-Groups dataset. We follow the pre-processing and set-up of Hinton & Salakhutdinov(2009). In addition, we make use. of the Reuters corpus (RCV1-V2), using a version that contained a selected 5,000 term vocabulary.. Note that the features are a log(1 + TF) transform of the original frequency vectors. To test our. document models on text from another language (in this case, Brazilian Portuguese), we make use of. the CADE12 dataset (stop-word removed and stemmed)Cardoso-Cachopo(2007), where we further filtered terms that occurred less than 130 times to obtain a vocabulary of 3,736 terms (over 26,991 training and 13,486 test documents). For all datasets, we track the validation bound on a subset of 100 vectors randomly drawn from each training corpus..\nBefore concatenation, we transform the piecewise constant latent variables to lie within the interval. z' = 2z - 1. This ensures the input to the decoder RNN has mean zero at the beginning of training. 6We believe that if da, = 0 for a long period of time, then the posterior receives no gradient signal. Without. a gradient signal, the estimated posterior becomes increasingly disconnected from the rest of the model and. thus, less effective. This might be due to the choice of non-linearities, which affect the piecewise latent variables moreso than the Gaussian latent variables.\n7we will make the code and scripts used to create the final document input vectors and vocabulary file publicly available upon publication.\nwhere O E R|V|d is the word embedding matrix for the output distribution with embedding dimen- sionality d E N. The model is trained by maximizing the variational lower-bound, which factorizes into independent terms for each sub-sequence (utterance):\nwhere distribution Q is the approximate posterior distribution with parameters , which is com- puted similar to the prior distribution but further conditioned on the future encoder RNN hidden State:\n20-NG Sampled SGD-Inf RCV1 Sampled SGD-Inf LDA 1058 G-NVDM 905 837 RSM 953 H-NVDM-3 865 807 docNADE 896 H-NVDM-5 833 781 SBN 909 fDARN 917 CADE Sampled SGD-Inf NVDM 836 G-NVDM 339 230 G-NVDM 651 588 H-NVDM-3 258 193 H-NVDM-3 607 546 H-NVDM-5 294 209 H-NVDM-5 566 496\nTable 1: Comparative test perplexities on various document datasets (50 latent variables). Note that document probabilities were calculated using 10 samples to estimate the variational lower bound\nFor the Gaussian NVDM (G-NVDM), we constrain the interpolated posterior variance to lie in the range of [0.01, 10.0]. For the hybrid NVDMs (H-NVDM) 8] we vary the number of components usec in the PDF, investigating the effect that 3 and 5 pieces had on the final quality of the model. 
Pa rameter updates for all models were estimated using mini-batches of 100 samples drawn randomly without replacement from the training data over 150 epochs. A learning rate of 0.002 was used Model selection and early stopping (the only additional form of regularization employed for this se of experiments) were conducted using the validation lower-bound, estimated using five stochastic samples per validation example. We rescale large gradients by their norm (Pascanu et al.|2012) Inference networks made use of 50 units in each hidden layer for 20 News-Groups and CADE anc 100 for RCV1, while all performed best with 50 latent variables (chosen via preliminary exper imentation with smaller models). On the 20 News-Groups, since we were able to use the sam set-up (especially vocabulary) asHinton & Salakhutdinov(2009), we also report the perplexities of a topic model (LDA,Hinton & Salakhutdinov(2009), the Replicated Softmax (RsM, Hintor & Salakhutdinov(2009)), the document neural auto-regressive estimator (docNADE,Larochelle & Lauly[(2012)), a sigmoid belief network (SBN,Mnih & Gregor (2014)), a deep auto-regressive neu ral network (fDARN,Mnih & Gregor (2014)), and a neural variational document model with a fixec standard Gaussian prior (NVDM, lowest reported perplexity,Miao et al.(2015)).\n8We ultimately found that averaging the variables, as opposed to using concatenation, yielded best perplexity and thus report these results.\nNG Sampled SGD-Inf RCV1 Sampled SGD-Inf 1 1058 G-NVDM 905 837 953 H-NVDM-3 865 807 NADE 896 H-NVDM-5 833 781 909 RN 917 CADE Sampled SGD-Inf DM 836 G-NVDM 339 230 IVDM 651 588 H-NVDM-3 258 193 IVDM-3 607 546 H-NVDM-5 294 209 IVDM-5 566 496\nG-NVDM H-NVDM-3 H-NVDM-5 G-NVDM H-NVDM-3 H-NVDM-5 governments citizens arms environment project science citizens rights rights project gov built country governments federal flight major high threat civil country lab based technology private freedom policy mission earth world rights legitimate administration launch include form individuals constitution protect field science scale military private working private nasa sun freedom court citizens build systems special foreign states military gov technical area\nTable 2: Word query similarity test, where each (20 News-Group) document model's decoder is given a query and must return the top 10 most relevant words. The first query was \"government' while the second was \"space\"'. It appears that the models with piecewise variables tend to associate more general/abstract terms to the query, which may or may not always be what is desired.\nIn Table [1] we report the test document perplexity (under the Sampled column), calculated using a particular document, was approximated with an estimate of the variational lower-bound using 10 samples, as was done in Mnih & Gregor (2014). The second score (or column SGD-Inf), refers to the model's test-perplexity when the lower-bound is tightened using iterative inference to search for the optimal latent variable per document. In this paper, our iterative inference procedure consisted of simple stochastic gradient descent (no more than 100 steps), with a learning rate of 0.1 and the same\ngradient rescaling used in training, using early-stopping (for 20 News-Groups, the lookahead was. 10, while on Reuters and CADE the lookahead was 5). 
The parameters of the model, as well as the well as the generated prior, are fixed, and the gradients of the variational lower bound with respec to generated posterior model parameters (i.e., the mean and variance of the Gaussian variables, anc. the piecewise components, a) are used to update the posterior model for each document (using :. freshly drawn sample each step).\nFirst and foremost, we note that the best baseline model (i.e., the NVDM) is more competitive. when both the prior and posterior models are learnt together (i.e., the G-NVDM), as opposed to. the fixed prior of Miao et al.(2015). However, we observe that integrating our proposed piecewise. variables yields even better results in our document modeling experiments, substantially improving. over the baselines. More importantly, in some cases, as in the 20 News-Groups and Reuters datasets,. increasing the number of pieces from 3 to 5 can further reduce perplexity. Thus, we have achieved a new state-of-the-art perplexity on 20 News-Group task and - to the best of our knowledge - better perplexities on the CADE12 and RCV1 tasks compared to using a state-of-the-art model like the G- NVDM. Furthermore, we observe iterative inference yields yet a further boost in performance since. the bound estimated is tighter, however, this form of inference is expensive and requires additional. meta-parameters (e.g., a step-size, an early-stopping criterion, etc.). We remark a simpler, and more. accurate, approach to inference would be to use importance sampling..\nIn Table [2] we examine the top ten highest ranked words given a query term, using the decode parameter matrix (since the decoder is directly affected by the latent variables in our documen. models). It appears that the piecewise variables affect what is uncovered by the model with respeci. to the data, as each model returns different, but relevant results with respect to the query word. Ir. our current examples, it appears that the H-NVDM with 5 pieces returns more general words. For. example, in the case of \"government\"', the baseline seems to value the plural form of the word (which. is largely based on morphology) while the hybrid model actually pulls out meaningful terms sucl as \"federal\", \"policy', and \"administration\". The case of \"space'\" is interesting-the hybrid with 5. pieces seems to value two senses of the word-one related to \"outer space' (e.g., \"sun\", \"world\". etc.) and another related to the dimensions of depth, height, and width within which things may. exist and move (e.g., \"area\", \"form\", \"scale\", etc.).\nWe experiment with VHRED for dialogue modeling. This is a difficult problem, extensively studie in the recent literature (Ritter et al.]2011} Lowe et al. I2015 Sordoni et al. 2015 Li et al.| 2016 Serban et al.] 2016a). Related systems for dialogue response generation have recently gained a sig nificant amount of attention from industry, with high-profile projects such as Google's SmartRepl system (Kannan et al.]2016) and Microsoft's chatbot Xiaolice (Markoff & Mozur2015). Eve more recently, Amazon has announced the Alexa Prize Challenge for the research community wit the goal of developing a natural and engaging chatbot system (Farber 2016).\nWe focus on non-goal-driven dialogue modeling and use the Twitter Dialogue Corpus (Ritter et al.. 2011) based on public Twitter conversations. The dataset is split into training, validation, and test. sets, containing respectively 749,060, 93,633 and 9,399 dialogues each. On average, each dia-. 
logue contains about 6 utterances (dialogue turns) and about 94 words. The dataset is the same as used by Serban et al.(2016b), but further pre-processed using byte-pair encoding (Sennrich et al.. 2016) using a vocabulary consisting of 5000 sub-words| The dialogues are substantially longer. than recent large-scale language modeling corpora, such as the 1 Billion Word Language Model. Benchmark (Chelba et al.]2014), which usually focus on modeling single sentences..\nParameter optimization was conducted with a learning rate of 0.0002 and mini-batches of size 4( or 80|10|we use a variant of truncated back-propagation and apply gradient clipping (Pascanu et al.. 2012). Model selection and early stopping - the only additional form of regularization employec. for this set of experiments - are conducted using the validation lower-bound, estimated using one. stochastic sample per validation example..\n9In addition to applying byte-pair encoding, we filtered out 601 test dialogues so that no test dialogue context overlapped with the training or validation sets. 10we had to. va i-batch size to make the tr GPIJ archite with low\nSimilar to Serban et al.(2016b), we use a bidirectional GRU RNN encoder, where the forward an oackward RNNs each have 1000 hidden units. We experiment with context RNN encoders witl 500 and 1000 hidden units, and find that that 1000 hidden units reach better performance w.r.t. the variational lower-bound on the validation set. The encoder and context RNNs use layer normaliza tion (Ba et al.]2016). We experiment with decoder RNNs with 1000, 2000 and 4000 hidden units LSTM cells), and find that 2000 hidden units reach better performance. For the G-VHRED model we experiment with latent multivariate Gaussian variables with 100 and 300 dimensions, and finc that 100 dimensions reach better performance. For the H-VHRED model, we experiment with laten multivariate Gaussian and piecewise constant variables each with 100 and 300 dimensions, and finc that 100 dimensions reach better performance. We follow the training procedure of Serban et al 2016b): we drop words in the decoder with a fixed drop rate of 25% and multiply the KL terms ir the variational lower-bound by a scalar, which starts at zero and linearly increases to 1 over the firs 60,000 training batches.\nWe also experiment with an LSTM baseline model and a HRED baseline model (Serban et al.. 2016a). For the LSTM model, we experiment with number of hidden units (LSTM cells) equa. to 1000, 2000 and 4000 and find that 4000 hidden units perform best w.r.t. validation perplextiy For the HRED model, we use the same encoder and context RNN architectures as the G-VHRED and H-VHRED models described earlier. We set the encoder RNN to have 1000 hidden units. We experiment with a context RNN with 500 and 1000 hidden units, and find that 1000 hidden units reach better performance. For the decoder RNN, we experiment with 1000 and 2000 hidden units (LSTM cells) and find that 2000 hidden units perform better..\nApproximate Posterior Analysis Our hypothesis is that the piecewise constant latent variables. are able to capture multi-modal aspects of the dialogue. Therefore, we evaluate the models by analyzing what information they have learned to represent in the latent variables. For each test dialogue with n utterances, we condition each model on the first n - 1 utterances and compute the latent posterior distributions using all n utterances. We then compute the gradients of the KL terms. 
of the multivariate Gaussian and piecewise constant latent variables w.r.t. each word in the dialogue Since the words vectors are discrete, we compute the sum of the squared gradients w.r.t. each word embedding. The higher the sum of the squared gradients of a word is, the more influence it will have.\nWord G-VHRED H-VHRED Word G-VHRED H-VHRED Time-related G-KL G-KL P-KL Event-related G-KL G-KL P-KL monday 3 5 10 school 9 16 50 tuesday 2 3 7 class 11 16 27 wednesday 4 11 13 game 20 26 41 thursday 2 3 9 movie 12 20 41 friday 9 18 26 club 13 22 28 saturday 6 6 13 party 8 10 32 sunday 2 2 9 wedding 7 13 23 weekend 8 16 32 birthday 12 20 23 today 18 28 56 easter 15 15 23 night 16 31 68 concert 7 16 20 tonight 32 36 47 dance 11 12 21 Word G-VHRED H-VHRED Word G-VHRED H-VHRED Sentiment Acronyms, Punctuation G-KL G-KL P-KL G-KL G-KL P-KL -related Marks & Emoticons good 72 73 44 1o1 394 358 312 love 102 101 38 omg 52 45 19 26 44 39 386 558 1009 awesome : cool 14 28 29 ! 648 951 525 haha 132 101 75 ? 507 851 221 hahaha 60 48 24 * 108 54 19 amazing 14 38 33 xd 28 42 26 thank 137 153 29 56 42 24\nTable 3: Approximate posterior word encoding on Twitter. The numbers are computed by counting the number of times each word is among the 5 words with the largest sum of squared gradients of the Gaussian KL divergence (G-KL) and piecewise constant KL divergence (P-KL).\non the posterior approximation (encoder model). For every test dialogue, we count the top 5 words with highest squared gradients separately for the multivariate Gaussian and piecewise constant laten Variables 11\nThe results are shown in Table[3] The piecewise constant latent variables clearly capture differen aspects of the dialogue compared to the Gaussian latent variables. The piecewise constant variabl approximate posterior encodes words related to time (e.g. weekdays and times of day) and event (e.g. parties, concerts, Easter). On the other hand, the Gaussian variable approximate posterior en codes words related to sentiment (e.g. laughter and appreciation) and acronyms, punctuation mark and emoticons (i.e. smilies). We also conduct a similar analysis on the document models evaluate in Sub-section|6.1] the results of which may be found in the Appendix.\nResponse Evaluation Non-goal-driven dialogue models are typically evaluated by asking humans to rate the quality of different responses. We follow the approach byLiu et al. (2016) by conducting an Amazon Mechanical Turk experiment to compare the G-VHRED and H-VHRED models. For each test dialogue, we use TF-IDF to extract 100 candidate responses (Lowe et al.]2015). We then rank. the responses according to the G-VHRED model and H-VHRED model using the variational lower-. bound!12| we ask three human evaluators to rate model responses for 45 dialogues on a Likert-type. scale 1 5, with 1 representing an inappropriate response and 5 representing a highly appropriate. response.13|For each dialogue, we show the human evaluators the top two responses ranked by the. G-VHRED and H-VHRED models. We choose to evaluate the re-ranked responses for two reasons. First, it reduces variance in the output because it uses the approximate posterior model, compared to using beam search with samples from the high-entropy prior. Second, it decreases the number. of generic responses, which are extremely common among generative models and which human. evaluators tend to prefer despite not advancing the dialogue (Li et al.]2016).\nThe results are as follows. 
The G-VHRED model achieves scores of 1.88 and 2.13 for the first and second ranked responses on average, and the H-VHRED model achieves scores of 1.93 and 2.04 on average. In other words, H-VHRED performs nominally better on the first ranked response while G-VHRED performs nominally better on the second ranked response. In conclusion, if there exists a difference between the two models, naive human evaluators cannot see it.

Although naive human evaluators cannot distinguish between the model responses, our previous analysis shows that the two models encode different aspects of dialogue conversations. Therefore, we further investigate the probability of different responses to dialogue contexts related to time and events. Two examples are shown in Figure 2, where the dialogue contexts are "when do you want to meet this weekend?" and "where are you going tomorrow?". H-VHRED assigns substantially more probability mass to relevant words compared to G-VHRED as well as to an LSTM baseline and an HRED baseline. This confirms the ability of the piecewise constant latent variables to generate responses related to time and events.

Finally, we also evaluate the diversity of the G-VHRED and H-VHRED model outputs w.r.t. the top ranked TF-IDF candidate responses. We measure the average word entropy (Serban et al., 2016b) as well as the number of unique words per response and the number of unique words across all test responses, but did not find a significant difference between the two models. This indicates that the Gaussian latent variables alone are able to increase response diversity, while the piecewise constant latent variables instead help encode specific aspects of the dialogue such as time and events.

13 Human evaluators are only given a minimal description of the task, without any examples, before beginning the evaluation.

Figure 2: Probabilities for different responses related to time and events: left) probabilities for giving a one-word response with one of the weekdays (monday, tuesday, ..., sunday) conditioned on the context utterance "when do you want to meet this weekend?"; right) probabilities for giving a one-word response with one of several event-related nouns (school, class, ..., wedding) conditioned on the context utterance "where are you going tomorrow?". The probabilities have been normalized in log-space by the number of words in the response, including end-of-utterance tokens. For G-VHRED and H-VHRED, the probabilities were estimated using the variational lower-bound over 10 samples."}, {"section_index": "9", "section_name": "7 CONCLUSIONS", "section_text": "In this paper, we have proposed the multi-modal variational encoder-decoder framework. In order to capture complex aspects of unknown data distributions, we developed the piecewise constant prior, which can be efficiently and flexibly adjusted to capture distributions with many modes, such as those over topics.
In experiments on document modeling and dialogue modeling, we have shown the effectiveness of our framework in building models capable of learning richer structure from data. In particular, we have demonstrated new state-of-the-art results on several document modeling tasks.

Future work should focus on exploring other natural language processing tasks where multi-modality plays an important role, such as modeling technical help dialogues (Lowe et al., 2015) and online debates (Rosenthal & McKeown, 2015), and where additional information is available, such as in semi-supervised document categorization (Ororbia II et al., 2015a). Furthermore, the piecewise variables proposed in this work could prove useful in uncovering interesting and novel information in lesser-explored corpora.

J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.

J. Bornschein and Y. Bengio. Reweighted wake-sleep. In ICLR, 2015.

S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio. Generating sentences from a continuous space. In Conference on Computational Natural Language Learning, 2015.

Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. arXiv preprint arXiv:1509.00519, 2015.

A. Cardoso-Cachopo. Improving Methods for Single-label Text Categorization. PhD thesis, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, 2007.

C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson. One billion word benchmark for measuring progress in statistical language modeling. In INTERSPEECH, 2014.

N. Crook, R. Granell, and S. Pulman. Unsupervised classification of dialogue acts using a Dirichlet process mixture model. In Proceedings of the SIGDIAL 2009 Conference, pp. 341-348. Association for Computational Linguistics, 2009.

P. Dayan and G. E. Hinton. Varieties of Helmholtz machine. Neural Networks, 9(8):1385-1403, 1996.

L. Devroye. Sample-based non-uniform random variate generation. In Proceedings of the 18th Conference on Winter Simulation, pp. 260-265. ACM, 1986.

J. L. Durrieu, J. P. Thiran, and F. Kelly. Lower and upper bounds for approximation of the Kullback-Leibler divergence between Gaussian mixture models. In ICASSP, pp. 4833-4836. IEEE, 2012.

M. Farber. Amazon's 'Alexa Prize' Will Give College Students Up To $2.5M To Create A Socialbot. Fortune, 2016.

K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. In International Conference on Learning Representations (ICLR), 2015.

G. E. Hinton and R. Salakhutdinov. Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems 22, pp. 1607-1614, 2009.

G. E. Hinton and R. S. Zemel. Autoencoders, minimum description length and Helmholtz free energy. In Advances in Neural Information Processing Systems 6, pp. 3-10, 1994.

G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158, 1995.

T. Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 50-57. ACM, 1999.
M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.

A. Kannan, K. Kurach, et al. Smart Reply: Automated response suggestion for email. In KDD, 2016.

A. B. L. Larsen, S. K. Sønderby, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.

S. Lauly, Y. Zheng, A. Allauzen, and H. Larochelle. Document neural autoregressive distribution estimation. arXiv preprint arXiv:1603.05962, 2016.

J. Li, M. Galley, C. Brockett, J. Gao, and B. Dolan. A diversity-promoting objective function for neural conversation models. In The North American Chapter of the Association for Computational Linguistics (NAACL), 2016.

C.-W. Liu, R. Lowe, I. V. Serban, M. Noseworthy, L. Charlin, and J. Pineau. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP, 2016.

R. Lowe, N. Pow, I. V. Serban, and J. Pineau. The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems. In SIGDIAL, 2015.

L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.

J. Markoff and P. Mozur. For Sympathetic Ear, More Chinese Turn to Smartphone Program. New York Times, 2015.

Y. Miao, L. Yu, and P. Blunsom. Neural variational inference for text processing. arXiv preprint arXiv:1511.06038, 2015.

R. M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113, 1992.

A. G. Ororbia II, C. L. Giles, and D. Reitter. Learning a deep hybrid model for semi-supervised text classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Lisbon, Portugal, 2015a.
A. G. Ororbia II, C. L. Giles, and D. Reitter. Online semi-supervised learning with deep hybrid Boltzmann machines and denoising autoencoders. arXiv preprint arXiv:1511.06964, 2015b.

R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML, volume 28, 2012.

D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.

D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning (ICML), 2014.

A. Ritter, C. Cherry, and W. B. Dolan. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 583-593. Association for Computational Linguistics (ACL), 2011.

S. Rosenthal and K. McKeown. I couldn't agree more: The role of conversational structure in agreement and disagreement detection in online discussions. In SIGDIAL, 2015.

R. Salakhutdinov and H. Larochelle. Efficient learning of deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 693-700, 2010.

T. Salimans, D. P. Kingma, M. Welling, et al. Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning (ICML), pp. 1218-1226, 2015.

R. Sennrich, B. Haddow, and A. Birch. Neural machine translation of rare words with subword units. In Association for Computational Linguistics (ACL), 2016.

I. V. Serban, A. Sordoni, Y. Bengio, A. Courville, and J. Pineau. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference (AAAI), pp. 3776-3784, 2016a.

I. V. Serban, A. Sordoni, R. Lowe, L. Charlin, J. Pineau, A. Courville, and Y. Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069, 2016b.

A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2015), 2015.

N. Srivastava, R. R. Salakhutdinov, and G. E. Hinton. Modeling documents with deep Boltzmann machines. arXiv preprint arXiv:1309.6865, 2013.

B. Uria, I. Murray, and H. Larochelle. A deep and tractable density estimator. In International Conference on Machine Learning (ICML), pp. 467-475, 2014.

K. Zhai and J. D. Williams. Discovering latent structure in task-oriented dialogues. In Association for Computational Linguistics (ACL), pp. 36-46, 2014."}, {"section_index": "10", "section_name": "APPENDIX A: ANALYSIS OF DOCUMENT MODEL PIECEWISE VARIABLES", "section_text": "We present an additional analysis of the learned 20 News-Groups document models in order to explore what each set of latent variables might be capturing. To calculate the gradient of the KL term needed to formulate word scores, we follow the approach described in Sub-section 6.2; however, we condition only on the (training) document bag-of-words to compute the latent posterior, and then calculate the gradient of the KL terms with respect to each word in the document.

In Table 4, we observe results similar to those of Sub-section 6.2: the piecewise variables capture different aspects of the document data. It is worth noting, in this experiment, that the Gaussian variables alone were originally sensitive to some of these words. However, in the hybrid model, nearly all of the temporal words that the Gaussian variables were once more sensitive to now more strongly affect the piecewise variables, which themselves also capture all of the words that
were originally missed. This might indicate a shift in responsibility, where the document model decides which latent variables are more suitable for capturing certain aspects of the data. This effect appears to be even stronger in the case of certain nationality-based adjectives (e.g., "american", "israeli", etc.). While the G-NVDM could model multi-modality in the data to some degree, this work would be done primarily in the model's decoder. In the H-NVDM, the piecewise variables provide an explicit mechanism for capturing modes in the unknown target distribution, so it makes sense that the model would learn to use the piecewise variables instead, thus freeing up the Gaussian variables to capture other aspects of the data, as we found was the case with names (e.g., "jesus", "kent", etc.).

Table 4: Approximate posterior word encodings on 20 News-Groups. For P-KL, we also bold every case where the piecewise variables showed greater sensitivity to the word than the Gaussian variables within the same hybrid model.

Word G-NVDM H-NVDM-5 Word G-NVDM H-NVDM-5 Time-related G-KL G-KL P-KL Names G-KL G-KL P-KL months 23 33 40 henry 33 47 39 day 28 32 35 tim 32 27 11 time 55 22 40 mary 26 51 30 century 28 13 19 james 40 72 30 past 30 18 28 jesus 28 87 39 days 37 14 19 26 56 george 29 ahead 33 20 33 keith 65 94 61 years 44 16 38 kent 51 56 15 today 46 27 71 chris 38 55 28 back 31 30 47 thomas 19 35 future 20 19 20 15 42 26 hitler 10 14 9 order 14 minute 15 34 40 paul 25 52 18 began 16 5 13 mike 38 76 40 night 49 12 18 bush 21 20 14 hour 18 17 16 early 42 42 69 Adjectives G-KL G-KL P-KL yesterday 25 26 36 50 12 american 40 year 60 17 21 25 21 german 22 week 28 54 58 20 17 european 27 hours 20 26 31 muslim 19 7 23 minutes 40 34 38 french 11 17 17 months 23 33 40 canadian 18 10 16 history 32 18 28 japanese 16 9 24 late 41 45 31 jewish 56 37 54 moment 23 17 16 english 19 16 26 season 45 29 37 islamic 14 18 28 summer 29 28 31 israeli 24 14 18 start 30 14 38 british 35 15 17 continue 21 32 34 22 27 35 russian 14 19 20 happened"}]
H1hoFU9xe [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Recently developed Generative Adversarial Networks (GAN, see Goodfellow et al. (2014)) are powerful generative models, the main idea of which is to train a generator and a discriminator network through playing a minimax game. In the image domain, for a dataset generated by some density pdata(x), a generator G attempts to approximate the image-generating distribution and to synthesize images that are as realistic as possible, while a discriminator D strives to distinguish real images from fake ones.

There are several modifications of GAN that can generate realistic images:

Deep Convolutional Generative Adversarial Networks (DCGAN, see Radford et al. (2015)) - this model is a modification of a GAN, specialized for the generation of images;
Conditional GAN - it allows generating objects from a specified class, see Mirza & Osindero (2014);
Generation of images from textual descriptions, see Reed et al. (2016).

In the present study we apply the DCGAN model to the problem of secure steganography. We construct a special container-image generator, whose synthetic output is less susceptible to successful steganalysis than containers derived directly from original images. In particular, we investigate whether this methodology allows us to deceive a given steganography analyzer, represented by a binary classifier detecting the presence of hidden messages in an image."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Steganography is a collection of methods to hide secret information ("payload") within non-secret information ("container"). Its counterpart, steganalysis, is the practice of determining if a message contains a hidden payload, and recovering it if possible. The presence of hidden payloads is typically detected by a binary classifier. In the present study, we propose a new model for generating image-like containers based on Deep Convolutional Generative Adversarial Networks (DCGAN). This approach allows generating more steganalysis-secure message embeddings using standard steganography algorithms. Experimental results demonstrate that the new model successfully deceives the steganography analyzer, and for this reason can be used in steganographic applications.

Steganography is the practice of concealing a secret message, e.g. a document, an image, or a video, within another non-secret message in the most inconspicuous manner possible. In this paper we consider text-to-image embedding, with the text given by a bit string. More formally, for a message T and an image I, a steganography algorithm is a map $S : T \times I \rightarrow \tilde{I}$, where $\tilde{I}$ is an image containing the message T, such that $\tilde{I}$ cannot be visually distinguished from I.

The most popular and easy-to-implement algorithm of embedding is the Least Significant Bit (LSB) algorithm. The main idea of LSB is to store the secret message in the least significant bits (last bits) of some color channel of each pixel in the given image container. Since pixels are adjusted independently of each other, the LSB algorithm alters the distribution of the least significant bits, thereby simplifying detection of the payload. A modification of this method, which does not substantially alter the distribution of the least significant bits, is the so-called ±1-embedding (Ker, 2005). This approach randomly adds or subtracts 1 from some color channel pixel so that the last bits match the ones needed, as sketched below.
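For concreteness, here is a minimal NumPy sketch of ±1 embedding into a single colour channel; the function name, the channel choice, and the clipping convention at the 0/255 boundaries are illustrative assumptions rather than the exact implementation evaluated later.

```python
import numpy as np

def embed_plus_minus_one(image, message_bits, rng=None):
    """Hide a bit string in the least significant bits of one colour channel.

    Instead of overwriting LSBs directly, each mismatching pixel is randomly
    incremented or decremented by 1, so the LSB distribution is distorted less
    than by plain LSB replacement.
    """
    rng = rng or np.random.default_rng()
    stego = image.astype(np.int16)              # avoid uint8 wrap-around
    channel = stego[..., 0].ravel()             # embed into the first channel only
    assert len(message_bits) <= channel.size
    for i, bit in enumerate(message_bits):
        if channel[i] % 2 != bit:               # matching LSB -> leave pixel alone
            step = rng.choice((-1, 1))
            if channel[i] == 0:                 # keep values inside [0, 255]
                step = 1
            elif channel[i] == 255:
                step = -1
            channel[i] += step
    stego[..., 0] = channel.reshape(stego[..., 0].shape)
    return stego.astype(np.uint8)
```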
In this paper we basically consider the ±1-embedding algorithm.

There are more sophisticated algorithms for embedding information into raster images: WOW (Holub & Fridrich, 2012), HUGO (Pevny et al., 2010), S-UNIWARD (Holub et al., 2014), and others. They are derived from the key ideas of the LSB algorithm, but use a more strategic pixel manipulation technique: for the raw image X and its final version with a secret message $\tilde{X}$, the pixels are picked in such a way as to minimize the distortion function

$$D(X, \tilde{X}) = \sum_{i=1}^{n_1}\sum_{j=1}^{n_2} \rho(X_{ij}, \tilde{X}_{ij})\,|X_{ij} - \tilde{X}_{ij}|,$$

where $\rho(X_{ij}, \tilde{X}_{ij})$ is the cost of changing pixel (i, j) of X, specific to each particular steganography algorithm.

For detecting the presence of hidden information in a container, steganalysis is usually used. The stage that distinguishes images carrying a hidden message from empty ones is usually cast as binary classification. The basic approach to steganalysis is based on feature extractors (such as SPAM (Pevny et al., 2010), SRM (Fridrich & Kodovsky, 2012), etc.) combined with traditional machine learning classifiers, such as SVMs, decision trees, ensembles, etc. With the recent overwhelming success of deep neural networks, newer neural-network-based approaches to steganalysis are gaining popularity (Qian et al., 2015b). For example, in Pibre et al. (2015) the authors propose to use deep convolutional neural networks (CNN) for steganalysis and show that classification accuracy can be increased significantly by using a CNN instead of the usual classifiers.

The main idea of the GAN approach to learning is that two neural networks are trained simultaneously:

a generative model (G) that receives noise from the prior distribution pnoise(z) on input and transforms it into a data sample from the distribution pg(x) that approximates pdata(x);
a discriminative model (D) which tries to detect if an object is real or generated by G.

The learning process can be described as a minimax game: the discriminator D maximizes the expected log-likelihood of correctly distinguishing real samples from fake ones, while the generator G maximizes the expected error of the discriminator by trying to synthesize better images. Therefore, during training the GAN solves the following optimization problem:

$$\mathcal{L}(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_{noise}(z)}[\log(1 - D(G(z)))] \rightarrow \min_G \max_D. \qquad (1)$$

The coupled optimization problem (1) is solved by alternating the maximization and minimization steps: on each iteration of the mini-batch stochastic gradient optimization we first make a gradient ascent step on D and then a gradient descent step on G. If by $\theta_M$ we denote the parameters of the neural network M, then the update rules are:

Keeping G fixed, update the model D by $\theta_D \leftarrow \theta_D + \gamma_D \nabla_D \mathcal{L}$ with

$$\nabla_D \mathcal{L} = \frac{\partial}{\partial \theta_D}\left\{\mathbb{E}_{x \sim p_{data}(x)}[\log D(x, \theta_D)] + \mathbb{E}_{z \sim p_{noise}(z)}[\log(1 - D(G(z, \theta_G), \theta_D))]\right\}. \qquad (2)$$

Keeping D fixed, update the model G by $\theta_G \leftarrow \theta_G - \gamma_G \nabla_G \mathcal{L}$ with

$$\nabla_G \mathcal{L} = \frac{\partial}{\partial \theta_G}\,\mathbb{E}_{z \sim p_{noise}(z)}[\log(1 - D(G(z, \theta_G), \theta_D))]. \qquad (3)$$

In Radford et al. (2015) the GAN idea was extended to deep convolutional networks (DCGAN), which are specialized for image generation. The paper discusses the advantages of adversarial training in image recognition and generation, and gives recommendations on constructing and training DCGANs. In fig. 1 we depict a sample of synthetic images from a freshly trained DCGAN on the Celebrities dataset (Ziwei Liu & Tang, 2015). The images indeed look realistic, albeit with occasional artifacts.

Figure 1: Sample synthetic images generated by DCGAN

In order to apply the GAN methodology to steganographic applications, we introduce the Steganographic Generative Adversarial Networks (SGAN) model, which consists of:

a generator network G, which produces realistic-looking images from noise;
a discriminator network D, which classifies whether an image is synthetic or real;
a discriminator network S, the steganalyser, which determines if an image contains a concealed secret message.

Since we want the generator to produce realistic images that could serve as containers for secure message embedding, we force G to compete against the models D and S simultaneously. If we denote by S(x) the probability that x has some hidden information, then we arrive at the following game:

$$\alpha\left(\mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_{noise}(z)}[\log(1 - D(G(z)))]\right) + (1 - \alpha)\,\mathbb{E}_{z \sim p_{noise}(z)}[\log S(\mathrm{Stego}(G(z))) + \log(1 - S(G(z)))] \rightarrow \min_G \max_D \max_S. \qquad (4)$$
We use a convex combination of the errors of D and S with parameter $\alpha \in [0, 1]$, which controls the trade-off between the importance of the realism of generated images and their security against steganalysis. Analysis of preliminary experimental results showed that for $\alpha < 0.7$ the generated images are unrealistic and resemble noise.

The full scheme of the SGAN is presented in fig. 2; each arrow represents an output-to-input data flow.

[Figure 2 diagram: Noise feeds the Generator Network G; the generated image goes both to the Discriminator D (together with real Data, for the Real/Fake decision) and, via Information Embedding, to the Steganalyzer S.]

Figure 2: SGAN information flow diagram

Stochastic mini-batch gradient descent update rules for the components of the SGAN are listed below.

For D the rule is $\theta_D \leftarrow \theta_D + \gamma_D \nabla_D \mathcal{L}$ with

$$\nabla_D \mathcal{L} = \frac{\partial}{\partial \theta_D}\left\{\mathbb{E}_{x \sim p_{data}(x)}[\log D(x, \theta_D)] + \mathbb{E}_{z \sim p_{noise}(z)}[\log(1 - D(G(z, \theta_G), \theta_D))]\right\}.$$

For S (it is updated similarly to D): $\theta_S \leftarrow \theta_S + \gamma_S \nabla_S \mathcal{L}$, where

$$\nabla_S \mathcal{L} = \frac{\partial}{\partial \theta_S}\,\mathbb{E}_{z \sim p_{noise}(z)}[\log S(\mathrm{Stego}(G(z, \theta_G)), \theta_S) + \log(1 - S(G(z, \theta_G), \theta_S))].$$

For the generator G: $\theta_G \leftarrow \theta_G - \gamma_G \nabla_G \mathcal{L}$ with $\nabla_G \mathcal{L}$ given by

$$\nabla_G \mathcal{L} = \frac{\partial}{\partial \theta_G}\left\{\alpha\,\mathbb{E}_{z \sim p_{noise}(z)}[\log(1 - D(G(z, \theta_G), \theta_D))] + (1 - \alpha)\,\mathbb{E}_{z \sim p_{noise}(z)}[\log S(\mathrm{Stego}(G(z, \theta_G)), \theta_S) + \log(1 - S(G(z, \theta_G), \theta_S))]\right\}.$$

The main distinction from the GAN model is that we update G in order to maximize not only the error of D, but the error of the linear combination of the classifiers D and S.
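The three rules combine into a simple alternating loop. The PyTorch-style sketch below replaces the log terms above with the standard non-saturating binary cross-entropy surrogates (same fixed points, more stable gradients); `G`, `D`, `S`, the `stego` operator, and the optimizers are assumed interfaces, and gradients are taken through `stego`, which only works if the embedding step is treated as (approximately) differentiable, e.g. as a small gradient-transparent perturbation.

```python
import torch
import torch.nn.functional as F

def sgan_step(G, D, S, stego, real_images, opt_G, opt_D, opt_S,
              alpha=0.7, noise_dim=100):
    """One stochastic mini-batch update of the three SGAN players.

    D and S are assumed to output probabilities in (0, 1) of shape (n, 1).
    """
    n = real_images.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Ascent on D: real images vs. generated containers.
    fake = G(torch.randn(n, noise_dim)).detach()
    loss_D = F.binary_cross_entropy(D(real_images), ones) + \
             F.binary_cross_entropy(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Ascent on S: containers with a hidden message vs. empty containers.
    loss_S = F.binary_cross_entropy(S(stego(fake)), ones) + \
             F.binary_cross_entropy(S(fake), zeros)
    opt_S.zero_grad(); loss_S.backward(); opt_S.step()

    # Descent on G: fool D and confuse S, weighted by alpha.
    fake = G(torch.randn(n, noise_dim))
    loss_G = alpha * F.binary_cross_entropy(D(fake), ones) + \
             (1 - alpha) * (F.binary_cross_entropy(S(stego(fake)), zeros) +
                            F.binary_cross_entropy(S(fake), ones))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```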
"}, {"section_index": "2", "section_name": "In our experiments1 we use the Celebrities dataset (Ziwei Liu & Tang, 2015), which contains 200,000 images. All images were cropped to 64 x 64 pixels.", "section_text": "For steganalysis purposes we consider 10% of the data as a test set. We denote the train set by A, the test set by B, and the steganography algorithm used for hiding information by Stego(x). After embedding secret information we get the train set A + Stego(A) and the test set B + Stego(B). We end up with 380,000 images for steganalysis training and 20,000 for testing. For training the SGAN model we used all 200,000 cropped images. After 8 epochs of training, our SGAN produces the images displayed in fig. 3.

For information embedding we use the ±1-embedding algorithm with a payload size equal to 0.4 bits per pixel for only one channel out of three. As the text for embedding we use randomly selected excerpts from articles in The New York Times.

1 Code is available at https://github.com/dvolkhonskiy/adversarial-steganography

Figure 3: Examples of images generated by the SGAN after training for 8 epochs on the Celebrities dataset

In this section we describe the SGAN model structure. By C2D-BN-LR we denote the following structural block of a convolutional neural network: Conv2D -> Batch Normalization -> Leaky ReLU.

The steganalyser network S and the image discriminator network D have a similar structure: four C2D-BN-LR layers, followed by a fully connected layer (1 neuron) whose sigmoid output gives the classification probability. The image generator network G is (in order) a fully connected layer (8192 neurons), four C2D-BN-LR blocks with fractionally-strided convolutions, and a hyperbolic tangent layer that computes the normalized output.

The SGAN model is trained to solve (4) using the Adam optimization algorithm (Kingma & Ba, 2014) with learning rate 2e-4 and update parameters beta1 = 0.5 and beta2 = 0.999. For each mini-batch of images we update the weights of D and S once, then we update the weights of G twice.

In the following experiments, in addition to the steganalyser S we use an independent steganalyser S*. We define a filter F(0), specific to steganalysis applications (see Pibre et al. (2015), Qian et al. (2015a), Tan & Li (2014)), as follows:

$$F^{(0)} = \frac{1}{12}\begin{pmatrix} -1 & 2 & -2 & 2 & -1\\ 2 & -6 & 8 & -6 & 2\\ -2 & 8 & -12 & 8 & -2\\ 2 & -6 & 8 & -6 & 2\\ -1 & 2 & -2 & 2 & -1 \end{pmatrix}$$

The structure of the independent steganalyser S* has the form: 2D convolution with the F(0) filter -> Conv2D -> Conv2D -> Max Pooling -> Conv2D -> Conv2D -> Max Pooling -> fully connected layer (1024 neurons) -> fully connected layer (1 neuron) -> sigmoid output. This structure provides state-of-the-art steganalysis accuracy (Pibre et al., 2015), and the filter F(0) increases the convergence speed of training S*. For training this steganalyser we use the Adam optimization algorithm with learning rate 5e-6, beta1 = 0.9 and beta2 = 0.999. As the loss function we use binary cross-entropy.
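Since F(0) is just a fixed high-pass convolution kernel, applying it as a preprocessing step is a one-liner. The SciPy-based helper below is illustrative (the padding convention is an assumption); it suppresses image content and amplifies the high-frequency residual where embedding artifacts live.

```python
import numpy as np
from scipy.signal import convolve2d

# The high-pass kernel F(0) used to pre-process inputs of the steganalyser S*.
F0 = (1.0 / 12.0) * np.array([
    [-1,  2,  -2,  2, -1],
    [ 2, -6,   8, -6,  2],
    [-2,  8, -12,  8, -2],
    [ 2, -6,   8, -6,  2],
    [-1,  2,  -2,  2, -1],
])

def residual(image_channel):
    """Convolve one image channel with F(0) before feeding it to S*."""
    return convolve2d(image_channel, F0, mode="same", boundary="symm")
```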
The setup of the experiments can be described as follows:

We train and use the SGAN and/or DCGAN model to generate images to be used as containers;
We train the independent steganalyser S* using either real images (sec. 5.3) or generated images (sec. 5.4);
We measure the accuracy of the steganalyser S*.

In the first set of experiments we train the independent steganalyser S* on real images. Results are provided in Table 1. From the results we conclude that even the usual DCGAN generates synthetic container images that can easily deceive a steganalyser. Although containers generated by the SGAN model, which is specially designed to deceive the steganalyser, improve on the result of DCGAN, the improvement is not significant. These results also suggest that S* may have been overfit to the data distribution of the real images, and failed to generalize to the GAN approximation of the data distribution in the synthetic image domain.

Table 1: Accuracy of the steganalyser S* trained on real images

Another interesting result follows from an experiment with the HUGO steganography algorithm and SGAN-generated containers:

We use the ±1-embedding algorithm for training the SGAN model;
We test the HUGO steganography analyzer on real images and on images generated by the SGAN model.

Containers generated by the SGAN model decrease HUGO steganalysis accuracy from 0.624 to 0.499, which demonstrates that Steganographic Generative Adversarial Networks can potentially be used as a universal tool for generating steganography containers tuned to deceive any specific steganalysis algorithm."}, {"section_index": "3", "section_name": "5.4 TRAINING/TESTING ON GENERATED IMAGES", "section_text": "In this set of experiments we train the steganalyser S* on images generated using the DCGAN model from the prior noise distribution pnoise(z) for some fixed seed value. In all experiments the size of the train set is fixed at ~160,000. After training, we test the analyser S* on images generated with the DCGAN model according to the following experimental setups:

C1. We use the same seed value;
C2. We use some randomly selected seed value;
C3. We use the same seed value as in C2, and we additionally tune the DCGAN model for several epochs.

Table 2: Training/testing on generated images according to experimental conditions C1-C3

Experimental Conditions   Accuracy
C1                        0.982
C2                        0.517
C3                        0.499

The experimental results in Table 2 indicate that using different seed values when generating containers most likely affects the distribution of pixel value bits in such a way as to make it easier to deceive the steganalyser, fitted to another distribution in the train sample. Additional tuning of the image generator G makes this effect even more pronounced.

In the next set of experiments we train and test the steganalyser S* on images generated according to the following experimental conditions:

C4. We generate a train set for the steganalyser S* using several different randomly selected seed values, and when generating the test set we use another fixed seed value;
C5. We generate the train set and the test set using a number of different randomly selected seed values;
C6. We use the same train and test sets as in C5, and we additionally train the DCGAN model for several epochs.
Table 3: Training/testing on generated images according to experimental conditions C4-C6

Experimental Conditions   Accuracy
C4                        0.649
C5                        0.630
C6                        0.581

According to Table 3, the accuracy in case C5 is lower than in case C4, which can be explained by the test set of C5 having more variability, being generated with different randomly selected seed values. Similarly, the accuracy in case C4 is higher than in C2, since in C4 the train set was generated with several different randomly selected seed values and thus is more representative. These observations confirm our initial conclusions drawn from Table 2.

We also conduct an experiment with classification of generated images without steganographic embeddings. For this purpose we train a conditional DCGAN model on the MNIST dataset, and train a separate classifier for the MNIST classification task. The trained classifier achieved almost perfect accuracy both on the held-out real MNIST dataset and on synthetic images produced by the DCGAN. This provides evidence that it is possible to train an image classifier that shows acceptable accuracy both on real and synthetic images. It is specifically the artificial generation of image containers that breaks the usual approaches to steganalysis.

To summarize, our main contributions are as follows:

1. We open a new field for applications of Generative Adversarial Networks, namely, container generation for steganography applications;
2. We consider the ±1-embedding algorithm and test novel approaches to more steganalysis-secure information embedding:
a) we demonstrate that both the SGAN and DCGAN models are capable of decreasing the detection accuracy of a steganalysis method almost to that of a random classifier;
b) if we initialize a generator of containers with different random seed values, we can decrease the steganography detection accuracy even further.

The research was supported solely by the Russian Science Foundation grant (project 14-50-00150). The authors would like to thank I. Nazarov for his assistance in preparation of this paper."}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Jessica Fridrich and Jan Kodovsky. Rich models for steganalysis of digital images. IEEE Transactions on Information Forensics and Security, 7(3):868-882, 2012.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Andrew D. Ker. Resampling and the detection of LSB matching in color bitmaps. In Electronic Imaging 2005, pp. 1-15. International Society for Optics and Photonics, 2005.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.

Tomas Pevny, Patrick Bas, and Jessica Fridrich. Steganalysis by subtractive pixel adjacency matrix. IEEE Transactions on Information Forensics and Security, 5(2):215-224, 2010.

Lionel Pibre, Jerome Pasquet, Dino Ienco, and Marc Chaumont. Deep learning for steganalysis is better than a rich model with an ensemble classifier, and is natively robust to the cover source-mismatch. arXiv preprint arXiv:1511.04855, 2015.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015."
ry3iBFqgl [{"section_index": "0", "section_name": "NEWSQA: A MACHINE COMPREHENSION DATASET", "section_text": "Adam Trischler*

Alessandro Sordoni

{adam.trischler, tong.wang, eric.yuan, justin.harris, alessandro.sordoni, phil.bachman, k.suleman}@maluuba.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Almost all human knowledge is recorded in the language of text. As such, comprehension of written language by machines, at a near-human level, would enable a broad class of artificial intelligence applications. In human students we evaluate reading comprehension by posing questions based on a text passage and then assessing a student's answers. Such comprehension tests are appealing because they are objectively gradable and may measure a range of important abilities, from basic understanding to causal reasoning to inference (Richardson et al., 2013). To teach literacy to machines, the research community has taken a similar approach with machine comprehension (MC).

Recent years have seen the release of a host of MC datasets. Generally, these consist of (document, question, answer) triples to be used in a supervised learning framework. Existing datasets vary in size, difficulty, and collection methodology; however, as pointed out by Rajpurkar et al. (2016), most suffer from one of two shortcomings: those that are designed explicitly to test comprehension (Richardson et al., 2013) are too small for training data-intensive deep learning models, while those that are sufficiently large for deep learning (Hermann et al., 2015; Hill et al., 2016; Bajgar et al., 2016) are generated synthetically, yielding questions that are not posed in natural language and that may not test comprehension directly (Chen et al., 2016). More recently, Rajpurkar et al. (2016) sought to overcome these deficiencies with their crowdsourced dataset, SQuAD.

Here we present a challenging new large-scale dataset for machine comprehension: NewsQA. NewsQA contains 119,633 natural language questions posed by crowdworkers on 12,744 news articles from CNN. Answers to these questions consist in spans of text within the corresponding article, highlighted by a distinct set of crowdworkers. To build NewsQA we utilized a four-stage collection process designed to encourage exploratory, curiosity-based questions that reflect human information seeking. CNN articles were chosen as the source material because they have been used in the past (Hermann et al., 2015) and, in our view, machine comprehension systems are particularly suited to high-volume, rapidly changing information sources like news.

* These three authors contributed equally

Justin Harris"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We present NewsQA, a challenging machine comprehension dataset of over 100,000 question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting in spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at
The dataset is freely available at\nAs Trischler et al.(2016a), Chen et al.(2016), and others have argued, it is important for datasets to be sufficiently challenging to teach models the abilities we wish them to learn. Thus, in line with Richardson et al.(2013), our goal with NewsQA was to construct a corpus of questions that necessitates reasoning mechanisms, such as synthesis of information across different parts of an article. We designed our collection methodology explicitly to capture such questions.\nThe challenging characteristics of NewsQA that distinguish it from most previous comprehensior tasks are as follows:\nNewsOA follows in the tradition of several recent comprehension datasets. These vary in size difficulty, and collection methodology, and each has its own distinguishing characteristics. We agree with Bajgar et al.(2016) who have said \"models could certainly benefit from as diverse a collection of datasets as possible.\"' We discuss this collection below.\nMCTest (Richardson et al.]2013) is a crowdsourced collection of 660 elementary-level children's. stories with associated questions and answers. The stories are fictional, to ensure that the answer must. be found in the text itself, and carefully limited to what a young child can understand. Each question comes with a set of 4 candidate answers that range from single words to full explanatory sentences. The questions are designed to require rudimentary reasoning and synthesis of information across. sentences, making the dataset quite challenging. This is compounded by the dataset's size, which. limits the training of expressive statistical models. Nevertheless, recent comprehension models have. performed well on MCTest (Sachan et al.]2015) Wang et al.[2015), including a highly structured neural model (Trischler et al.|2016a). These models all rely on access to the small set of candidate. answers, a crutch that NewsQA does not provide..\nThe CNN/Daily Mail corpus (Hermann et al.]2015) consists of news articles scraped from those outlets with corresponding cloze-style questions. Cloze questions are constructed synthetically by deleting a single entity from abstractive summary points that accompany each article (written presumably by human authors). As such, determining the correct answer relies mostly on recognizing textual entailment between the article and the question. The named entities within an article are identified and anonymized in a preprocessing step and constitute the set of candidate answers; contrast this with NewsOA in which answers often include longer phrases and no candidates are given.\nBecause the cloze process is automatic, it is straightforward to collect a significant amount of data to support deep-learning approaches: CNN/Daily Mail contains about 1.4 million question-answer. pairs. However,Chen et al.[(2016) demonstrated that the task requires only limited reasoning and, in\n1. Answers are spans of arbitrary length within an article, rather than single words or entities. 2. Some questions have no answer in the corresponding article (the null span). 3. There are no candidate answers from which to choose.. 4. Our collection process encourages lexical and syntactic divergence between questions and. answers. 5. 
A significant proportion of questions requires reasoning beyond simple word- and context- matching (as shown in our analysis)..\nn this paper we describe the collection methodology for NewsQA, provide a variety of statistics tc characterize it and contrast it with previous datasets, and assess its difficulty. In particular, we measure human performance and compare it to that of two strong neural-network baselines. Unsurprisingly humans significantly outperform the models we designed and assessed, achieving an F1 score of 0.694 versus 0.496 for the best-performing machine. We hope that this corpus will spur further advances on the challenging task of machine comprehension."}, {"section_index": "3", "section_name": "fact, performance of the strongest models (Kadlec et al.]2016] Trischler et al.2016b) Sordoni et a 2016) nearly matches that of humans.", "section_text": "The Children's Book Test (CBT) (Hill et al.]2016) was collected using a process similar to that of. CNN/Daily Mail. Text passages are 20-sentence excerpts from children's books available through Project Gutenberg; questions are generated by deleting a single word in the next (i.e., 21st) sentence.. Consequently, CBT evaluates word prediction based on context. It is a comprehension task insofar as comprehension is likely necessary for this prediction, but comprehension may be insufficient and other mechanisms may be more important.."}, {"section_index": "4", "section_name": "2.4 BOOKTEST", "section_text": "Bajgar et al.(2016) convincingly argue that, because existing datasets are not large enough, we have. yet to reach the full capacity of existing comprehension models. As a remedy they present BookTest This is an extension to the named-entity and common-noun strata of CBT that increases their size by over 60 times.Bajgar et al.[(2016) demonstrate that training on the augmented dataset yields a. model (Kadlec et al.[2016) that matches human performance on CBT. This is impressive and suggests that much is to be gained from more data, but we repeat our concerns about the relevance of story. prediction as a comprehension task. We also wish to encourage more efficient learning from less data."}, {"section_index": "5", "section_name": "2.5 SQuAD", "section_text": "The comprehension dataset most closely related to NewsQA is SQuAD (Rajpurkar et al.]2016). It consists of natural language questions posed by crowdworkers on paragraphs from high-PageRank Wikipedia articles. As in NewsQA, each answer consists of a span of text from the related paragraph and no candidates are provided. Despite the effort of manual labelling, SQuAD's size is significant and amenable to deep learning approaches: 107,785 question-answer pairs based on 536 articles.\nSQuAD is a challenging comprehension task in which humans far outperform machines. The. authors measured human accuracy at 0.905 in F1 (we measured human F1 at 0.807 using a different methodology), whereas at the time of the writing, the strongest published model to date achieves only. 0.700 in F1 (Wang & Jiang2016b).\nWe collected NewsQA through a four-stage process: article curation, question sourcing, answer. sourcing, and validation. We also applied a post-processing step with answer agreement consolidation and span merging to enhance the usability of the dataset."}, {"section_index": "6", "section_name": "3.1 ARTICLE CURATION", "section_text": "We retrieve articles from CNN using the script created byHermann et al.(2015) for CNN/Daily. Mail. 
From the returned set of 90,266 articles, we select 12,744 uniformly at random. These cover a wide range of topics that includes politics, economics, and current events. Articles are partitioned at random into a training set (90%), a development set (5%), and a test set (5%).

It was important to us to collect challenging questions that could not be answered using straightforward word- or context-matching. Like Richardson et al. (2013), we want to encourage reasoning in comprehension models. We are also interested in questions that, in some sense, model human curiosity and reflect actual human use-cases of information seeking. Along a similar line, we consider it an important (though as yet overlooked) capacity of a comprehension model to recognize when given information is inadequate, so we are also interested in questions that may not have sufficient evidence in the text. Our question sourcing stage was designed to solicit questions of this nature, and was deliberately separated from the answer sourcing stage for the same reason.

Questioners (a distinct set of crowdworkers) see only a news article's headline and its summary points (also available from CNN); they do not see the full article itself. They are asked to formulate a question from this incomplete information. This encourages curiosity about the contents of the full article and prevents questions that are simple reformulations of sentences in the text. It also increases the likelihood of questions whose answers do not exist in the text. We reject questions that have significant word overlap with the summary points to ensure that crowdworkers do not treat the summaries as mini-articles, and further discouraged this in the instructions. During collection each Questioner is solicited for up to three questions about an article. They are provided with positive and negative examples to prompt and guide them (detailed instructions are shown in Figure 3)."}, {"section_index": "7", "section_name": "3.3 ANSWER SOURCING", "section_text": "A second set of crowdworkers (Answerers) provide answers. Although this separation of question and answer increases the overall cognitive load, we hypothesized that unburdening Questioners in this way would encourage more complex questions. Answerers receive a full article along with a crowdsourced question and are tasked with determining the answer. They may also reject the question as nonsensical, or select the null answer if the article contains insufficient information. Answers are submitted by clicking on and highlighting words in the article, while instructions encourage the set of answer words to consist of a single continuous span (again, we give an example prompt in the Appendix). For each question we solicit answers from multiple crowdworkers (avg. 2.73) with the aim of achieving agreement between at least two Answerers."}, {"section_index": "8", "section_name": "3.4 VALIDATION", "section_text": "Crowdsourcing is a powerful tool but it is not without peril (collection glitches; uninterested or malicious workers). To obtain a dataset of the highest possible quality we use a validation process that mitigates some of these issues. In validation, a third set of crowdworkers sees the full article, a question, and the set of unique answers to that question. We task these workers with choosing the best answer from the candidate set or rejecting all answers. Each article-question pair is validated by an average of 2.48 crowdworkers. Validation was used on those questions without answer-agreement
after the previous stage, amounting to 43.2% of all questions."}, {"section_index": "9", "section_name": "3.5 ANSWER MARKING AND CLEANUP", "section_text": "After validation, 86.0% of all questions in NewsQA have answers agreed upon by at least two separate crowdworkers, either at the initial answer sourcing stage or in the top-answer selection. This improves the dataset's quality. We choose to include the questions without agreed answers in the corpus as well, but they are specially marked. Such questions could be treated as having the null answer and used to train models that are aware of poorly posed questions.

As a final cleanup step, we combine answer spans that are less than 3 words apart (punctuation is discounted). We find that 5.68% of answers consist of multiple spans, while 71.3% of multi-spans are within the 3-word threshold. Looking more closely at the data reveals that the multi-span answers often represent lists. These may present an interesting challenge for comprehension models moving forward.
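This cleanup rule can be sketched as follows; the (start, end) word-offset representation and the function name are our own illustration, not the released NewsQA tooling, and punctuation tokens are assumed to have been dropped from the offsets already.

```python
def merge_spans(spans, max_gap=3):
    """Combine answer spans separated by fewer than `max_gap` words.

    `spans` is a list of (start, end) word offsets with `end` exclusive.
    """
    merged = []
    for start, end in sorted(spans):
        if merged and start - merged[-1][1] < max_gap:
            merged[-1] = (merged[-1][0], max(end, merged[-1][1]))
        else:
            merged.append((start, end))
    return merged

# e.g. merge_spans([(4, 6), (7, 9), (20, 22)]) -> [(4, 9), (20, 22)]
```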
However, NewsQA significantly outnumbers SQuAD on the distribution of the more difficult forms of reasoning: synthesis and inference make up 33.9% of the data in contrast to 20.5% in SQuAD.\nWe test the performance of three comprehension systems on NewsQA: human data analysts and two neural models. The first neural model is the match-LSTM (mLSTM) system of|Wang & Jiang\nAnswer type Example Proportion (%) Date/Time March 12, 2008 2.9 Numeric 24.3 million 9.8 Person Ludwig van Beethoven 14.8 Location Torrance, California 7.8 Other Entity Pew Hispanic Center 5.8 Common Noun Phrase federal prosecutors 22.2 Adjective Phrase 5-hour 1.9 Verb Phrase suffered minor damage 1.4 Clause Phrase trampling on human rights 18.3 Prepositional Phrase in the attack 3.8 Other nearly half 11.2\n1. Word Matching: Important words in the question exactly match words in the immediate context of an answer span such that a keyword search algorithm could perform well on this subset. 2. Paraphrasing: A single sentence in the article entails or paraphrases the question. Para phrase recognition may require synonymy and word knowledge. 3. Inference: The answer must be inferred from incomplete information in the article or by recognizing conceptual overlap. This typically draws on world knowledge. 4. Synthesis: The answer can only be inferred by synthesizing information distributed across multiple sentences. 5. Ambiguous/Insufficient: The question has no answer or no unique answer in the article\nProportion (%) Reasoning Example NewsQA SQuAD Word Matching Q: When were the findings published?. 32.7 39.8 S: Both sets of research findings were published Thursday.... Paraphrasing Q: Who is the struggle between in Rwanda?. 27.0 34.3 S: The struggle pits ethnic Tutsis, supported by Rwanda, against ethnic Hutu, backed by Congo.. Inference Q: Who drew inspiration from presidents?. 13.2 8.6 S: Rudy Ruiz says the lives of US presidents can make them positive role models for students. Synthesis Q: Where is Brittanee Drexel from? 20.7 11.9 S: The mother of a 17-year-old Rochester, New York high school student ... says she did not give her daughter permission to go on the trip. Brittanee Marie Drexel's mom says.... Ambiguous/Insufficient Q: Whose mother is moving to the White House?. 6.4 5.4 S: ... Barack Obama's mother-in-law, Marian Robinson, will join the Obamas at the family's private quarters at 1600 Pennsylvania Avenue. [Michelle is never mentioned].\n[2016b). The second is a model of our own design that is computationally cheaper. We describe these models below but omit the personal details of our analysts. Implementation details of the models are described in Appendix|A"}, {"section_index": "12", "section_name": "5.1 MATCH-LSTM", "section_text": "There are three stages involved in the mLSTM model. First, LSTM networks encode the documen and question (represented by GloVe word embeddings (Pennington et al.[2014)) as sequences o1. hidden states. Second, an mLSTM network (Wang & Jiang2016a) compares the document encodings. with the question encodings. This network processes the document sequentially and at each toker uses an attention mechanism to obtain a weighted vector representation of the question; the weightec combination is concatenated with the encoding of the current token and fed into a standard LSTM Finally, a Pointer Network uses the hidden states of the mLSTM to select the boundaries of the. answer span. We refer the reader to|Wang & Jiang(2016a b) for full details. At the time of writing. 
mLSTM is state-of-the-art on SQuAD (see Table[3) so it is natural to test it further on NewsOA..\nEncoding All words in the document and question are mapped to real-valued vectors using the A bidirectional GRU network (Bahdanau et al.[2015) takes in d; and encodes contextual states h; E RD1 for the document. The same encoder is applied to q, to derive contextual states k; E IRD1 for the question3\nTc E RDixD1 gij E RC\nwhich we use to produce an (n m C)-dimensional tensor of annotation scores, G = [gi]. We take the maximum over the question-token (second) dimension and call the columns of the resulting\nTable 2: Reasoning mechanisms needed to answer questions. For each we show an example question with the sentence that contains the answer span, with words relevant to the reasoning type in bold. and the corresponding proportion in the human-evaluated subset of both NewsQA and SQuAD (1,000 samples each).\nThe match-LSTM is computationally intensive since it computes an attention over the entire question at each document token in the recurrence. To facilitate faster experimentation with NewsQA we developed a lighter-weight model (BARB) that achieves similar results on SQuAd? Our model. consists in four stages:\nBilinear Annotation Next we compare the document and question encodings using a set of C\nmatrix g E RC. We use this matrix as an annotation over the document word dimension. Contrasting the multiplicative application of attention vectors, this annotation matrix is to be concatenated to the encoder RNN input in the re-encoding stage\nRe-encoding For each document word, the input of the re-encoding RNN (another biGRU network consists of three components: the document encodings hi, the annotation vectors gi, and a binary feature qi indicating whether the document word appears in the question. The resulting vectors f, = [hi; gi; qi] are fed into the re-encoding RNN to produce D2-dimensional encodings e; as input in the boundary-pointing stage..\nWe also provide an intermediate level of \"guidance\"' to the annotation mechanism by first reducing the feature dimension C in G with mean-pooling, then maximizing the softmax probabilities in the resulting (n-dimensional) vector corresponding to the answer word positions in each document. This auxiliary task is observed empirically to improve performance.\nWe tested four English speakers (three native and one near-native) on a total of 1,000 questions from the NewsQA development set. As given in Table[3] they averaged O.694 in F1, which likely represents a ceiling for machine performance. Our students' exact match (EM) scores are relatively low at O.465 This is because in many cases there are multiple ways to select semantically equivalent answers, e.g '1996\"' versus \"in 1996'. We also compared human performance on the answers that had agreemen with and without validation, finding a difference of only 1.4 percentage points F1. This suggests our validation stage yields good-quality answers.\nThe original SQuAD evaluation of human performance compares separate answers given by crowd. workers; for a closer comparison with NewsQA, we replicated our human test on the same number of validation data (1,ooo). We measured their answers against the second group of crowdsourced. responses in SQuAD's development set, as in Rajpurkar et al.[(2016). 
{"section_index": "13", "section_name": "6.2 MODEL PERFORMANCE", "section_text":

Performance of the baseline models and humans is measured by EM and F1 with the official evaluation script from SQuAD and listed in Table 3. Unless otherwise stated, hyperparameters are determined by hyperopt (Appendix A). The gap between human and machine performance on NewsQA is a striking 0.198 points F1, much larger than the gap on SQuAD (0.098) under the same human evaluation scheme. The gaps suggest a large margin for improvement with automated methods.⁴

⁴All experiments in this section use the subset of the NewsQA dataset with answer agreements (92,549 samples for training, 5,166 for validation, and 5,126 for testing). We leave the challenge of identifying the unanswerable questions for future work.

SQuAD
Model          EM (Dev)   EM (Test)   F1 (Dev)   F1 (Test)
Random¹          0.11       0.13        0.41       0.43
mLSTM²           0.591      0.595       0.700      0.703
BARB             0.591      -           0.709      -
Human¹           0.803      0.770       0.905      0.868
Human (ours)     0.650      -           0.807      -

NewsQA
Model          EM (Dev)   EM (Test)   F1 (Dev)   F1 (Test)
Random           0.00       0.00        0.30       0.30
mLSTM            0.344      0.349       0.496      0.500
BARB             0.361      0.341       0.496      0.482
Human            0.465      -           0.694      -

Table 3: Performance of several methods and humans on the SQuAD and NewsQA datasets. Superscript 1 indicates the results are taken from Rajpurkar et al. (2016), and 2 from Wang & Jiang (2016b).

[Figure 1: two bar charts; the left panel plots BARB F1 and EM by answer type, the right panel plots F1 by reasoning type on NewsQA and SQuAD.]

Figure 1: Left: BARB performance (F1 and EM) stratified by answer type on the full development set of NewsQA. Right: BARB performance (F1) stratified by reasoning type on the human-assessed subset on both NewsQA and SQuAD. Error bars indicate performance differences between BARB and human annotators.

Figure 1 stratifies model (BARB) performance according to answer type (left) and reasoning type (right) as defined in Sections 4.1 and 4.2, respectively. The answer-type stratification suggests that the model is better at pointing to named entities compared to other types of answers. The reasoning-type stratification, on the other hand, shows that questions requiring inference and synthesis are, not surprisingly, more difficult for the model. Consistent with observations in Table 3, stratified performance on NewsQA is significantly lower than on SQuAD. The difference is smallest on word matching and largest on synthesis. We postulate that the longer stories in NewsQA make synthesizing information from separate sentences more difficult, since the relevant sentences may be farther apart. This requires the model to track longer-term dependencies.

We propose a simple sentence-level subtask as an additional quantitative demonstration of the relative difficulty of NewsQA. Given a document and a question, the goal is to find the sentence containing the answer span.
We hypothesize that simple techniques like word-matching are inadequate to this task owing to the more involved reasoning required by NewsQA.

We employ a technique that resembles inverse document frequency (idf), which we call inverse sentence frequency (isf). Given a sentence S_i from an article and its corresponding question Q, the isf score is given by the sum of the idf scores of the words common to S_i and Q (each sentence is treated as a document for the idf computation). The sentence with the highest isf is taken as the answer sentence S*, that is,

S* = arg max_i Σ_{w ∈ S_i ∩ Q} idf(w).

The isf method achieves an impressive 79.4% sentence-level accuracy on SQuAD's development set but only 35.4% accuracy on NewsQA's development set, highlighting the comparative difficulty of the latter. To eliminate the difference in article length as a possible cause of the performance difference, we also artificially increased the article lengths in SQuAD by concatenating adjacent SQuAD articles from the same Wikipedia document. Accuracy decreases as expected with the increased SQuAD article length, yet remains significantly higher than that on NewsQA with comparable or even larger article length (Table 4).

                        SQuAD                            NewsQA
# documents          1      3      5      7      9         1
Avg # sentences     4.9   14.3   23.2   31.8   40.3       30.7
isf accuracy (%)   79.6   74.9   73.0   72.3   71.0       35.4

Table 4: Sentence-level accuracy on artificially-lengthened SQuAD documents.

{"section_index": "14", "section_name": "7 CONCLUSION", "section_text":

We have introduced a challenging new comprehension dataset: NewsQA. We collected the 100,000+ examples of NewsQA using teams of crowdworkers, who variously read CNN articles or highlights, posed questions about them, and determined answers. Our methodology yields diverse answer types and a significant proportion of questions that require some reasoning ability to solve. This makes the corpus challenging, as confirmed by the large performance gap between humans and deep neural models (0.198 in F1). By its size and complexity, NewsQA makes a significant extension to the existing body of comprehension datasets. We hope that our corpus will spur further advances in machine comprehension and guide the development of literate artificial intelligence.

{"section_index": "15", "section_name": "ACKNOWLEDGMENTS", "section_text":

The authors would like to thank Caglar Gulcehre, Sandeep Subramanian and Saizheng Zhang for helpful discussions, and Pranav Subramani for the graphs.

{"section_index": "16", "section_name": "REFERENCES", "section_text":

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

François Chollet. Keras. https://github.com/fchollet/keras, 2015.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249-256, 2010.

Diederik Kingma and Jimmy Ba.
Adam: A method for stochastic optimization. ICLR, 2015.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318, 2013.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1684-1692, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. ICLR, 2016.

Mrinmaya Sachan, Avinava Dubey, Eric P. Xing, and Matthew Richardson. Learning answer-entailing structures for machine comprehension. In Proceedings of ACL, 2015.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension with syntax, frames, and semantics. In Proceedings of ACL, Volume 2: Short Papers, pp. 700, 2015.

Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. NAACL, 2016a.

Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016b.

Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 1, pp. 2, 2013.

Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. A parallel-hierarchical model for machine comprehension on sparse data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016a.

Adam Trischler, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. Natural language comprehension with the EpiReader. In EMNLP, 2016b.

{"section_index": "17", "section_name": "IMPLEMENTATION DETAILS", "section_text":

Both mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using the Theano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors (Pennington et al., 2014) pre-trained on the 840-billion-token Common Crawl corpus. The word embeddings are not updated during training. Embeddings for out-of-vocabulary words are initialized with zero.

Model parameters are initialized with either the normal distribution (N(0, 0.05)) or the orthogonal initialization (O; Saxe et al., 2013) in Keras. All weight matrices in the LSTMs are initialized with O.

Parameter tuning is performed on both models using hyperopt. For each model, the configuration with the best observed performance is as follows:

{"section_index": "18", "section_name": "mLSTM", "section_text":

Both the pre-processing layer and the answer-pointing layer use bi-directional RNNs with a hidden size of 192. These settings are consistent with those used by Wang & Jiang (2016b).

In the match-LSTM layer, W^q, W^p, and W^r are initialized with O, b^p and w are initialized with N, and b is initialized as 1. In the answer-pointing layer, V and W^a are initialized with O, b^a and v are initialized with N, and c is initialized as 1.

{"section_index": "19", "section_name": "BARB", "section_text":

For BARB, the following hyperparameters are used on both SQuAD and NewsQA: d = 300, D1 = 128, C = 64, D2 = 256, w = 3, and n_f = 128. Weight matrices in the GRU, the bilinear models, as well as the boundary decoder (v_s and v_e) are initialized with O. The filter weights in the boundary decoder are initialized with glorot_uniform (Glorot & Bengio, 2010; the default in Keras). The bilinear biases are initialized with N, and the boundary decoder biases are initialized with 0.

For both models, the training objective is to maximize the log likelihood of the boundary pointers. Optimization is performed using stochastic gradient descent (with a batch size of 32) with the ADAM optimizer (Kingma & Ba, 2015). The initial learning rate is 0.003 for mLSTM and 0.0005 for BARB. The learning rate is decayed by a factor of 0.7 if validation loss does not decrease at the end of each epoch. Gradient clipping (Pascanu et al., 2013) is applied with a threshold of 5.
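A minimal sketch of this optimization setup in Keras (our own illustration with a stand-in model and random data; only the optimizer choice, learning rate, clipping threshold, decay factor and batch size follow the description above):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from keras.callbacks import ReduceLROnPlateau

# Stand-in network and data; the real models are the mLSTM and BARB graphs.
model = Sequential([Dense(10, input_dim=20, activation='softmax')])
x = np.random.rand(256, 20)
y = np.eye(10)[np.random.randint(10, size=256)]

# ADAM with the BARB learning rate; gradients are clipped at norm 5.
model.compile(optimizer=Adam(lr=0.0005, clipnorm=5.0),
              loss='categorical_crossentropy')

# Decay the learning rate by 0.7 whenever validation loss stops decreasing.
decay = ReduceLROnPlateau(monitor='val_loss', factor=0.7, patience=1)
model.fit(x, y, batch_size=32, epochs=5, validation_split=0.1,
          callbacks=[decay])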
Here we present the user interfaces used in question sourcing, answer sourcing, and question/answer validation.

[Figure 2 reproduces three interface screenshots. The question-sourcing view shows only the article highlights (e.g., "Three women to jointly receive the 2011 Nobel Peace Prize") with free-text fields Q1-Q3 for new questions. The answer-sourcing view shows a question (e.g., "What is the age of Patrick McGoohan?") above the full CNN story, with an option to flag questions that do not make sense. The validation view asks workers to select the best answer to a question (e.g., "When was the lockdown initiated?") from candidates such as "Tucson, Arizona", "10:30 a.m." and "11 a.m.", or to mark all answers as very bad or the question as nonsensical, with the story shown below for convenience.]

Figure 2: Examples of user interfaces for question sourcing, answer sourcing, and validation.

[Figure 3: instructions given to the question-sourcing crowdworkers; they note, among other things, that the questions can refer directly to the highlights.]

Figure 3: Question sourcing instructions for the crowdworkers.

}]
HkpLeH9el | [{"section_index": "0", "section_name": "NEURAL FUNCTIONAL PROGRAMMING", "section_text": "John K. Feser
Massachusetts Institute of Technology"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text":

Inductive Program Synthesis (IPS), i.e., the task of learning a program from input/output examples, is a fundamental problem in computer science. It is at the core of empowering non-experts to use computers for repeated tasks, and recent advances such as the FlashFill extension of Microsoft Excel (Gulwani, 2011) have started to deliver on this promise.

{"section_index": "2", "section_name": "2 BACKGROUND", "section_text":

The basic building block of functional programs is the function, and programs are built by composing functions together. In the following, we highlight some common features in functional programs before discussing how to integrate them into an end-to-end differentiable model in Sect. 3.

Marc Brockschmidt, Alexander L. Gaunt, Daniel Tarlow
{mabrocks,t-algaun,dtarlow}@microsoft.com

{"section_index": "3", "section_name": "ABSTRACT", "section_text":

We discuss a range of modeling choices that arise when constructing an end-to-end differentiable programming language suitable for learning programs from input/output examples. Taking cues from programming languages research, we study the effect of memory allocation schemes, immutable data, type systems, and built-in control-flow structures on the success rate of learning algorithms. We build a range of models leading up to a simple differentiable functional programming language. Our empirical evaluation shows that this language allows learning far more programs than existing baselines.

A related line of research is the extension of neural network architectures with components that correspond to hardware primitives (Giles et al., 1989; Graves et al., 2014; Weston et al., 2015; Joulin & Mikolov, 2015; Grefenstette et al., 2015; Kurach et al., 2016; Kaiser & Sutskever, 2016; Reed & de Freitas, 2016; Andrychowicz & Kurach, 2016; Zaremba et al., 2016; Graves et al., 2016), enabling them to learn program-like behavior. However, these models are usually tightly coupled to the idea of a differentiable interpretation of computer hardware, as names such as Neural Turing Machine (Graves et al., 2014), Neural Random-Access Machine (Kurach et al., 2016), and Neural GPU (Kaiser & Sutskever, 2016) indicate. We observe that while such architectures form the basis of modern computing, they are usually not the models that are used to program computers. Instead, decades of programming languages research have led to ever higher-level programming languages that aim to make programming simpler and less error-prone. Indeed, as recent comparisons show (Gaunt et al., 2016b), program synthesis methods from the programming languages community that actively exploit such constructs, e.g. by leveraging known semantics of loops, are currently achieving considerably better results than comparable neural architectures. Still, neural IPS techniques are clearly at an advantage when extending the problem setting from simple integer input/output examples to more complex cases, such as IPS problems with perceptual data (Gaunt et al., 2016a), imprecise examples, or leveraging additional cues such as a natural language description of the desired program.

Hence, we propose to adapt features of modern high-level programming languages to the differentiable setting. In this paper, we develop an end-to-end differentiable programming language operating
on integers and lists, taking cues from functional programming. In our empirical evaluation, we show the effects on learning performance of our four modeling recommendations, namely automatic memory management, the use of combinators and if-then-else constructs to structure program control flow, immutability of data, and the application of a simple type system. Our experiments show that each of these features crucially improves program learning over existing baselines.

Immutable Data: Functions are expected to behave like their mathematical counterparts, avoiding mutable data and side effects. This helps programmers reason about their code, as it eliminates the possibility that a variable might be left uninitialized or accessed in an inconsistent state. Moreover, no data is ever "lost" by being overwritten or mutated.

Types: Expressive type systems are used to protect programmers from writing programs that will fail. Practically, a type checker is able to rule out many syntactically correct programs that are certain to fail at runtime, and thus restricts the space of valid programs. Access to types helps programmers to reason about the behavior of their code. In particular, the type system tells the programmer what kinds of data they can expect each variable to contain.

Structured Control Flow: A key difference between hardware-level assembly languages and higher-level programming languages is that higher-level languages structure control flow using loops, conditional statements, and procedures, as raw gotos are famously considered harmful (Dijkstra, 1968). Functional languages go a step further and leverage higher-order functions to abstract over common control flow patterns such as iteration over a recursive data structure. In an imperative language, such specialized control flow is often repeated and mixed with other code.

Memory Management: Most modern programming languages eschew manual memory management and pointer manipulation where possible. Instead, creation of heap objects automatically generates an appropriate pointer to fresh memory. Similarly, built-in constructs allow access to fields of objects, instead of requiring pointer arithmetic. Both of these choices move program complexity into the fixed implementation of a programming language, making it easier to write correct programs.

In the following, we will discuss a range of models, starting with a simple assembly-like language and progressing to a differentiable version of a simple functional programming language. We make four modeling recommendations whose effect we demonstrate in our experiments in Sect. 4.

{"section_index": "4", "section_name": "3 OUR MODELS", "section_text":

We first discuss the general format of our programs and program states, which we will refine step by step. Our programs operate on states consisting of an instruction pointer indicating the next instruction to execute, a number of registers holding inputs and intermediate results of executed instructions, and a heap containing memory allocated by the program. We focus on list-manipulating programs, so we create a heap consisting of standard cons cells, which are data and pointer value pairs where the pointer points to another cons cell or the special nil value. To represent a linked list, each cell points to the next cell in the list, except for the last cell, which points to nil.

{"section_index": "5", "section_name": "3.1 PROGRAM AND DATA REPRESENTATION", "section_text":

We define our models by lifting simple instructions to the differentiable setting. To do so, we bound the domain of all values and parameters, following earlier work (e.g. Graves et al. (2014); Kurach et al. (2016); Gaunt et al. (2016b)). We represent a value v from a domain {d_1, ..., d_D} as a vector in R^D interpreted as a discrete probability distribution. We pick a maximal integer value M that bounds all values occurring in our programs, a number of instructions I, and a number of registers R. In this setting, the size of the heap memory H has to be equal to the maximal integer value M, but we will relax this later. We limit the length of programs to some value P, and can then encode programs as P tuples (o(p), i(p), a1(p), a2(p)), where i(p) ∈ [1, I] identifies the instruction to execute at line p, and o(p), a1(p), a2(p) ∈ [1, R] its output and argument registers, respectively.
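For concreteness, such a parameterization can be sketched as follows (our own illustration, not the TerpreT implementation used in the paper): each program slot holds learnable parameter vectors that a softmax turns into distributions over the discrete choices.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

I, R = 12, 3   # number of instructions and registers (illustrative sizes)

# One program slot p holds four learnable parameter vectors, one per choice
# i(p), o(p), a1(p), a2(p); softmax turns them into probability distributions.
rng = np.random.RandomState(0)
params = {name: rng.randn(dim) for name, dim in
          [("instr", I), ("out", R), ("arg1", R), ("arg2", R)]}
dists = {name: softmax(theta) for name, theta in params.items()}

# After training, the program is read off by discretizing each distribution.
slot = {name: int(np.argmax(p)) for name, p in dists.items()}
print(slot)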
To \"execute\"' such a program we unroll it for T timesteps and keep a program state s(t) = (p(t), r(t) . r(t)b(t) fO R each timestep t, where p(t) e [1, P] is an instruction pointer indicating which instruction to execute next, r+) (t) are the values of registers, and h(t) are the values of the cons cells in the heap.\nAll of our models share a basic instruction set, namely the cons cell constructor cons, the heap. accessors (car & cdr) which return the data (resp. pointer) element of a cons cell, integer addition increment and decrement (add, inc, dec), integer equality and greater-than comparison (eq & gt).. Boolean conjunction and disjunction (and & or), common constants (zero & one), and finally a.\nMemory Management Most modern programming languages eschew manual memory manage ment and pointer manipulation where possible. Instead, creation of heap objects automatically gener ates an appropriate pointer to fresh memory. Similarly, built-in constructs allow access to fields of objects, instead of requiring pointer arithmetic. Both of these choices move program complexity into the fixed implementation of a programming language, making it easier to write correct programs.\nIn the following, we will discuss a range of models, starting with a simple assembly-like language. and progressing to a differentiable version of a simple functional programming language. We make four modeling recommendations whose effect we demonstrate in our experiments in Sect.4.\ns(t+1) =[p(t)=p][o(P)=o] [i(p)=i] [a(P)=a1] [aP)=a2] n(s(t)(o,i,a1, a2) pE[1,P],iE[1,I], 0,a1, a2 E[1, R]\nIn practice, we developed our models in TerpreT (Gaunt et al.| 2016b), which hides these technicali ties.\nsuch that program \"evaluation' according to (1) starting on a state s(o) initialized to an example input yields the target output in s(T). For scalar outputs such as a sum of values, our objective is simply to minimize (T). probability mass on the correct output value.\nif i =1 R [a=a]dn(T) ai. Vi = otherwise aE[1,H][ai-1=a].Pn(T) aE[1,H]\nThe probability that the computed output list is equal to an expected output list [1, ..., k] is ther [ak+1 = 0] ;=1[Vi = Ui]\nMemory Management As the programs we want to learn need to construct new lists, we need a memory allocation mechanism that provides fresh cells. We explored two options for this allocator\nFirst, we attempt to follow stack-allocation models in which a stack of memory cells is used with. a stack pointer sp which always points to the next free memory cell. We fix a maximum stack size. H. Whenever a memory cell is allocated (i.e., a cons instruction is executed), the stack pointer is. incremented, guaranteeing that no cell is ever overwritten. However, uncertainty about whether an. instruction is cons translates into uncertainty about the precise value of the stack pointer, as each call to cons changes sp. This uncertainty causes cells holding results from different instructions in the stack to blur together, despite the fact that cells are immutable once created. As an example. consider the execution of two instructions, where the first is cons 1 0 with probability 0.5 and. noop otherwise, and the second is cons 2 0 with probability 0.5 and noop otherwise. After. executing starting with sp = 1 and an empty stack, the value of sp will be blurred across three values. 1, 2 and 3 with probabilities 0.25, 0.5 and 0.25. Similarly, the value of the first heap cell will be 0. (the default) with probability 0.25, 1 with probability 0.5 and 2 with probability 0.25. This blurring. 
Both of these problems can be solved by transitioning to a fully immutable representation of the heap. In this variant, we allocate and initialize one heap cell per timestep, i.e., we set H = T. If the current instruction is a cons, the appropriate values are filled in; otherwise both data and pointer value are set to a default value (in our case, 0). This eliminates the issue of blurring between outputs of different instructions. The values of a cons cell may still be uncertain, as they inherit uncertainty about the executed instructions and the values of arguments, but they depend only on the operations at one timestep. While this modification requires a larger domain to store pointers, we found that not copying the stack significantly reduces memory usage during training of our models.

Recommendation (F): Use fixed heap memory allocation deterministically controlled by the model.

{"section_index": "6", "section_name": "3.2 PROGRAM MODELS", "section_text":

Our baseline program model corresponds closely to an assembly language as used in earlier work (Bunel et al., 2016), resulting in a program model as shown on the right, where boxes correspond to learnable parameters. [Diagram: each program line 1, 2, ... has parameter boxes out, instr, arg1, arg2, branch.] We extend our instruction set with jump-if-zero (jz), jump-if-not-zero (jnz) and return instructions. Our assembly program representation also includes a "branch" parameter b specifying the new value of the instruction pointer for a successful conditional branch. To learn programs in this language, the model must learn how to create the control flow that it needs using these simple conditional jumps. Note that the instruction pointer suffers from the same problems as the stack pointer above, i.e., uncertainty about its value blurs together the effects of many possible program executions.
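To make this concrete, the following small sketch (ours, not from the paper) applies the averaging of Eq. (1) with an uncertain instruction pointer over two candidate lines; the register distribution immediately blurs.

import numpy as np

M = 5                                    # registers hold values in [0, M)
r0 = np.zeros(M); r0[2] = 1.0            # register r0 certainly holds 2

def run_line(r, instr):
    # Apply `inc` or `dec` (mod M) to a register distribution.
    return np.roll(r, 1) if instr == "inc" else np.roll(r, -1)

# Instruction pointer distribution over two program lines: 70% line 0, 30% line 1.
ip = np.array([0.7, 0.3])
lines = ["inc", "dec"]

# As in Eq. (1): the next state is the ip-weighted average over per-line outcomes.
r0_next = sum(ip[p] * run_line(r0, lines[p]) for p in range(len(lines)))
print(r0_next)   # mass 0.7 on value 3 and 0.3 on value 1: the register blurs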
Structured Control Flow: We see structured control flow as a way to reduce the "bleeding" of uncertainty about the value of the instruction pointer into the values of registers and cells on the heap. To introduce structured control flow, we replace raw jumps with an if-then-else instruction and an explicit foreach loop that is suited for processing lists. We restrict our model to a prefix of instructions, a loop which iterates over a list, and a suffix of instructions. [Diagram: prefix lines pre1, pre2; a "foreach ele in ..." loop with lines loop1, loop2; a suffix line suf1; each line has parameter boxes out, instr, arg1, arg2, cond.] The parameters for instructions in the loop can access an additional register that contains the value of the current list element. In practice, we unroll the loop for a fixed number of iterations determined by the input, which ensures that every input list can be processed, and the instruction executed at each timestep becomes deterministic, removing uncertainty about the instruction pointer.

For the if-then-else instruction, we extend the instruction representation with a "condition" parameter c ∈ [1, R] and let the evaluation of if-then-else yield its first argument when the register c is non-zero and the second argument otherwise. An overview of the structure of such programs is displayed above.

Aggregating results while iterating over a list can already help the model further. In functional languages, such common control-flow patterns are encapsulated in higher-order functions, i.e., combinators. In place of the simple foreach loop, we thus also provide combinators: map creates a new list by applying a function to every element of the input list, zipwith iterates over two lists in parallel, and fold aggregates a result while iterating over a list by applying a function to the current list element and the value computed so far. A program model using the foldli combinator is shown on the left. [Diagram: registers acc and idx = 0 are initialized before a "foreach ele in ..." loop whose body lines have parameter boxes out, instr, arg1, arg2, cond; idx is incremented and acc updated in each iteration.] The i suffix indicates that these combinators additionally provide the index of the current list element (the precise semantics of the combinators are presented in Sect. A.1).

Recommendation (L): Instead of raw jumps, use loop and if-then-else templates.

Immutable Data: In training our models, we observed that many random initializations of the program parameters would overwrite input data or important intermediate results, and later steps would not be able to recover this data. In models with combinators that provide a way to accumulate result values, we can sidestep this issue by making registers immutable. To do so, we create one register per timestep, and fix the output of each instruction to the register for its timestep. Parameters for arguments then range over all registers initialized in prior timesteps, with an exception for the closures executed by a combinator. Here, each instruction only gets access to the inputs to the closure, values computed in the prefix, and registers initialized by preceding instructions in the same loop iteration. As in the heap allocation case, we can avoid keeping a copy of all registers for every timestep, and instead share these values over all steps, reducing memory usage.

Recommendation (I): Use immutable registers where possible.
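A Python sketch of the combinator semantics (our own rendering, consistent with the descriptions above and with Figure 3 in the appendix; the extra index argument reflects the i suffix):

def foldli(lst, acc, func):
    # Aggregate over the list: combine each element with the value so far.
    for idx, ele in enumerate(lst):
        acc = func(ele, acc, idx)
    return acc

def mapi(lst, func):
    # Build a new list by applying func to every element (and its index).
    return [func(ele, idx) for idx, ele in enumerate(lst)]

def zipwithi(lst1, lst2, func):
    # Iterate over two lists in parallel.
    return [func(e1, e2, idx) for idx, (e1, e2) in enumerate(zip(lst1, lst2))]

print(foldli([1, 2, 3], 0, lambda ele, acc, idx: acc + ele))   # sum: 6
print(mapi([1, 2, 3], lambda ele, idx: ele + 1))               # mapInc: [2, 3, 4]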
Types: When training our models, we found that for many initializations, training would fall into local minima corresponding to ill-typed programs, e.g., where references to the heap would be used in integer additions. We expect the learned program to be well-typed, so we introduce a simple type system. We explored two approaches to adding a type system.

A first attempt integrated the well-typedness of the program into our objective function. In our programs, we use three simple types of data (integers, pointers and booleans) as well as a special type ⊥, which represents type errors. We extended the program state to contain an additional element t_r for each register, encoding its type. Each instruction then not only computes a value that is assigned to the target register, but also a type for the target register. Most significantly, if one of the arguments has an unsuitable type (e.g., an integer in place of a pointer), the resulting type is ⊥. We then extended our objective function to add a penalty for values with type ⊥. Unfortunately, this changed objective function had neither a positive nor a negative effect in our experiments, so it seems that optimizing for the correct type is redundant when we are already optimizing for the correct return value.

In our second attempt, rather than penalizing ill-typed programs, we prevent programs from accessing ill-typed data by construction. We augment our register representation by adding an integer, pointer and Boolean slot to each register, so each register can hold a separate value of each type. Instructions which read from registers now read from the slot corresponding to the type of the argument. When writing to a register, we write to the slot corresponding to the instruction's return type, and set the other slots to a default value 0. This prevents any ill-typed sequence of instructions, i.e., it is now impossible to, for example, increment a pointer value or to construct a cons cell with a non-pointer value. Furthermore, this modification allows us to set the heap size H to a value different from the maximal integer M. Our experiments in Sect. 4.3 show that separating differently-typed values simplifies the learning of programs that operate on lists and integers at the same time.

Recommendation (T): Use different storage for data of different types.
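A small sketch of this typed register layout (our own illustration; the slot sizes and the default value follow the description above):

import numpy as np

M, H = 20, 22   # the integer domain and the heap size can now differ

def one_hot(v, dim):
    x = np.zeros(dim); x[v] = 1.0
    return x

def fresh_register():
    # One register holds a separate distribution per type slot; all slots
    # start at the default value 0.
    return {"int": one_hot(0, M), "ptr": one_hot(0, H), "bool": one_hot(0, 2)}

def write(slot, dist):
    # Writing an instruction result: the slot matching the instruction's
    # return type gets the value distribution, the other slots are reset
    # to the default value 0, so ill-typed reads are impossible.
    out = fresh_register()
    out[slot] = dist
    return out

r = write("int", one_hot(7, M))
# An instruction like `inc` reads only the integer slot, never the pointer slot:
inc_result = np.roll(r["int"], 1)    # (7 + 1) mod M, lifted to distributions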
We have empirically evaluated our modeling recommendations on a selection of program induction tasks of increasing complexity, ranging from straight-line programs to problems with loops and conditional expressions. All of our models are implemented in TerpreT (Gaunt et al., 2016b) and we learn using TerpreT's TensorFlow (Abadi et al., 2015) backend. We aim to release TerpreT, together with these models, under an open source license in the near future.

For all tasks, three groups of five input/output example pairs were sampled as training data and another 25 input/output pairs as test data. For each group of five examples, training was started from 100 random initializations of the model parameters. After training for 3500 epochs (tests with longer training runs showed no significant changes in the outcomes), the learned programs were tested by discretizing all parameters and comparing program outputs on test inputs with the expected values. We perform 300 runs per model and task, and report only the ratio of successful runs. A run is successful if the discretized program returns the correct result on all five training and 25 test examples.¹ The ratio of runs converging to zero loss on the training examples was within 1% of the number of successful runs, i.e., very few found solutions failed to generalize.

¹We inspected samples of the obtained programs as well and verified that they were indeed correct solutions. See Sect. A.2 for some of the learned programs.

[Figure 1: two line plots of success rate against k ∈ {1, ..., 9} for the dupK (left) and getK (right) tasks, with one curve per model (A, A+F, A+L, C, C+I, C+T, C+T+I, λ2).]

Figure 1: Success rate of our models on straight-line programs of increasing length.

We performed a cursory exploration of hyperparameter choices. We varied the choice of optimization algorithm (Momentum, Adam, RMSProp), the learning rate (from 0.001 to 5), gradient noise (testing the recommended choices from Neelakantan et al. (2016b)), a decaying entropy bonus (starting from 0.001 to 20), and gradient clipping (to values between 0.1 and 10). We sampled 100 hyperparameter settings from this space and tested their effect on two simple tasks. We ran the remaining experiments with the best configuration obtained by this process: the RMSProp optimization algorithm, a learning rate of 0.1, clipped gradients at 1, and no gradient noise.

In our experiments we evaluate the effect of the choices discussed in Sect. 3, comparing seven model variants in total. We call our initial assembly model A and its variation with a fixed memory allocation scheme A+F. All other models use the same fixed memory allocation scheme. The extension of the assembly model with a built-in foreach loop is called A+L. The A+L model also allows a foreachZip loop structure that allows parallel iteration over two lists, similar to the zipwith combinator. The model including predefined combinators is called C, where C+I (resp. C+T) are its extensions with immutable registers (resp. typed registers). Finally, C+T+I combines all of these and is, in effect, a simple end-to-end differentiable functional programming language.

{"section_index": "7", "section_name": "4.1 STRAIGHT-LINE PROGRAMS", "section_text":

In our first experiment, we consider two families of simple problems, solvable with straight-line programs, to study the interaction of our modeling choices with program length. Our first benchmark task is to duplicate a scalar input a fixed number k of times to create a list of length k. Our second benchmark is to retrieve the k-th element of a list, again fixing k beforehand (we will consider a generalization of this task where k is a program input later). We set the hyperparameters for all models to allow 11 statements, i.e., for A and A+F we have set the program length to 11, and for the A+L and C* models we have set the prefix and loop length to 0 and the suffix length to 11. For models where the number of registers does not depend on the number of timesteps, we set the number of registers to 3, with one initialized to the input. This allows for ~10^39 programs in the A, A+F, C+I, and C+T+I models and for ~10^28 programs in the remaining models. These parameters were chosen to be slightly larger than required by the largest program to be learned. For all of our experiments, the maximal integer M was set to 20 for models where possible (i.e., for A, C+T+I, C+T), and to H (derived from T, coming to 22) for the others.²

²We also experimented with varying the value of M. Choices over 20 showed no significant differences to smaller values.
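For reference, the two target behaviors can be written directly in Python (our own illustration of the task specifications; the 0-based indexing convention for getK is our assumption):

def dupK(x, k):
    # Duplicate the scalar input x k times to form a list of length k.
    return [x] * k

def getK(lst, k):
    # Retrieve the k-th element of a list (k is fixed per task instance).
    return lst[k]

# Input/output pairs of the kind sampled for training (here k = 3):
assert dupK(7, 3) == [7, 7, 7]
assert getK([4, 9, 2, 5], 3) == 5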
We consider the ratio of successful runs, as earlier work has identified this as a significant problem. For example, Neelakantan et al. (2016b) report that even after a (task-specific) "large grid search" of hyperparameters, the Neural Random-Access Machine converged only in 5%, 7% and 22% of random restarts. Similar observations were made in Kaiser & Sutskever (2016), Bunel et al. (2016) and Gaunt et al. (2016b) for related program learning models.

Additionally, we show results for λ2 (Feser et al., 2015), a strong program synthesis baseline from programming languages research, because of its built-in support for list-processing programs. As λ2 is deterministic, we only report a success rate of either 1 or 0.

We evaluated all of our models following the regime discussed above and present the results in Fig. 1 for k values from 1 to 9. The difference between A and A+F on the dupK task illustrates the significance of Recommendation (F) to fix the memory allocation scheme. Following Recommendation (T) to separate values of different types improves the results on both tasks, as the differences between C+T+I (resp. C+T) and C+I (resp. C) illustrate.

{"section_index": "8", "section_name": "4.2 SIMPLE LOOP PROGRAMS", "section_text":

In our second experiment, we compare our models on three simple list algorithms: computing the length of a list, reversing a list, and summing a list. Model parameters have been set to allow 6 statements for the A and A+F models, and empty prefixes, empty suffixes, and 2 instructions in the loop for the other models. For models where the number of registers does not depend on the number of timesteps, we set the number of registers to 4, with one initialized to the input.

Program    C+T+I     C+T      C+I       C       A     A+F     A+L      λ2
len       100.00    75.00   100.00    43.67   0.00   0.00   15.67   100.00
rev        48.33    32.67    46.33    41.33   0.00   0.00   86.33   100.00
sum        91.67    41.00    88.33    30.67   0.00   0.00   32.67   100.00

Table 1: Success ratios for experiments on simple loop-requiring tasks.

The results of our evaluation are displayed in Tab. 1, starkly illustrating Recommendation (L) to use predefined loop structures. We speculate that learning explicit jump targets is extremely challenging, because changes to the parameters controlling jump target instructions have outsized effects on all computed (intermediate and output) values. On the other hand, models that could choose between different list iteration primitives were able to find programs for all tasks. We again note the effect of Recommendation (T) to separate values of different types on the success rates for the len and sum examples, and the effect of Recommendation (I) to avoid mutable data on results for len and rev.

{"section_index": "9", "section_name": "4.3 LOOP PROGRAMS", "section_text":

In our main experiment, we consider a larger set of common list-manipulating tasks (such as checking if all/one element of a list is greater than a bound, retrieving a list element by index, finding the index of a value, and computing the maximum value). Descriptions of all tasks are shown in Fig. 2 in the appendix. We do not show results for the A and A+F models, which always fail.
We set the parameters for the remaining models to M = 32 where possible (M = H = 34 for the others), the length of the prefix to 1, the length of the closure / loop body to 3, and the length of the suffix to 2. Again, these parameters are slightly larger than required by the largest program to be learned.

Program        C+T+I     C+T     C+I      C      A+L      λ2
len            98.67    96.33    0.67    0.33    0.00   100.00
rev            18.00    10.33    2.67    8.33    9.67   100.00
sum            38.00    38.33    1.00    0.00   10.00   100.00
allGtK          0.00     0.00    0.00    0.33    0.00   100.00
exGtK           3.00     1.00    0.67    0.00    0.67   100.00
findLastIdx     0.33     0.00    0.00    0.00    0.00     0.00
getIdx          1.00     0.00    0.00    0.00    0.00     0.00
last2           0.00     8.00    0.00    2.00   23.00     0.00
mapAddK       100.00    98.00  100.00   95.67    0.00   100.00
mapInc         99.67    98.00   99.33   97.00    0.00   100.00
max             2.33     5.67    0.00    0.00    0.33   100.00
pairwiseSum    43.33    32.33   43.67   33.67    0.00   100.00
revMapInc       0.00     0.67    0.00    0.00    6.33   100.00

Table 2: Success ratios for the full set of tasks.

The results for our experiments on these tasks are shown in Tab. 2. Note the changed results for the examples from Sect. 4.2, as the change in model parameters has increased the size of the program space from ~10^7 to ~10^20. The relative results for the A+L model show the value of built-in iteration and aggregation patterns. The choice between immutable and mutable registers is less clear here, seemingly dampened by other influences. An inspection of the generated programs (e.g. Fig. 8 in the appendix) reveals that mutability of registers can sometimes be exploited to find elegant solutions. Overall, it may be effective to combine both approaches, using a small number of (mutable) "scratch value" registers and immutable default output registers for each statement.

{"section_index": "10", "section_name": "5 RELATED WORK", "section_text":

Neural Networks Learning Algorithms: A number of recent models aim to learn algorithms from input/output data. Many of these augment standard recurrent neural network architectures with differentiable memory and simple computation components (e.g. Graves et al. (2014); Kurach et al. (2016); Joulin & Mikolov (2015); Neelakantan et al. (2016a); Reed & de Freitas (2016); Zaremba et al. (2016); Graves et al. (2016)). The use of an RNN can be seen as a fixed looping structure, and the use of fixed output registers for the modules in Neural Random-Access Machines (Kurach et al., 2016) is similar to our modeling of immutable registers.

However, none of these works focus on producing source code. Gaunt et al. (2016b) show that this is an extremely challenging task for assembly-like program models. More recently, Bunel et al. (2016) and Riedel et al. (2016) have used program models similar to assembly (resp. Forth) source code to initialize solutions, and either optimize or complete them.

We have discussed a range of modeling choices for end-to-end differentiable programming languages and made four design recommendations.
Empirically, we have shown these recommendations to significantly improve the success ratio of learning programs from input/output examples, and we expect these results to generalize to similar models attempting to learn programs.

In this paper, we only consider list-manipulating programs, but we are interested in supporting more data structures, such as arrays (which should be a straightforward extension) and associative maps. We also only support loops over lists at this time, but are interested in extending our models to also have built-in support for loops counting up to (and down from) integer values. A generalization of this concept would be an extension allowing the learning and use of recursive functions. Recursion is still more structured than raw goto calls, but more flexible than the combinators that we currently employ. An efficient implementation of recursion is a challenging research problem, but it could allow significantly more complex programs to be learned. Modeling recursion in an end-to-end differentiable language could allow us to build libraries of (learned) differentiable functions that can be used in later synthesis problems.

However, we note that with few exceptions on long straight-line code, λ2 performs better than all of our considered models, and is able to synthesize programs in milliseconds. We see the future of differentiable programming languages in areas in which deterministic tools are known to perform poorly, such as the integration of perceptual data, priors, and "soft" side information such as natural language hints about the desired functionality. Gaunt et al. (2016a) was developed in parallel to this work and builds on many of our results to learn programs that can process perceptual data (in the current example, images).

Inductive Program Synthesis: There has been significant recent interest in synthesizing functional programs from input-output examples in the programming languages community. Synthesis systems generally operate by searching for a program which is correct on the examples, using types or custom deduction rules to eliminate parts of the search space. Among the notable systems: MyTH (Osera & Zdancewic, 2015; Frankle et al., 2016) synthesizes recursive functional programs from examples, using types to guide the search for a correct program; λ2 (Feser et al., 2015) synthesizes data-structure-manipulating programs structured using combinators, using types and deduction rules in its search; ESCHER (Albarghouthi et al., 2013) synthesizes recursive programs using search and a specialized method for learning conditional expressions; and FlashFill (Gulwani, 2011) structures programs as compositions of functions and uses custom deduction rules to prune candidate programs. Our decision to learn functional programs was strongly inspired by this previous work. In particular, the use of combinators to structure control flow was drawn from Feser et al. (2015).
However, our end-to-end differentiable setting is fundamentally different from the discrete search employed in the programming languages community, and thus concrete techniques are largely incomparable.

{"section_index": "11", "section_name": "REFERENCES", "section_text":

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Marcin Andrychowicz and Karol Kurach. Learning efficient algorithms with hierarchical attentive memory. arXiv preprint arXiv:1602.03218, 2016.

Rudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip H. S. Torr, and M. Pawan Kumar. Adaptive neural compilation. In Proceedings of the 29th Conference on Advances in Neural Information Processing Systems (NIPS), 2016. To appear.

Edsger W. Dijkstra. Letters to the Editor: Go to statement considered harmful. Communications of the ACM, 11(3):147-148, 1968. ISSN 0001-0782. doi: 10.1145/362929.362947.

John K. Feser, Swarat Chaudhuri, and Isil Dillig. Synthesizing data structure transformations from input-output examples. In Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pp. 229-239, 2015.

Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, and Daniel Tarlow. Lifelong perceptual programming by example. 2016a. Under submission to ICLR 2017.

Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Proceedings of the 28th Conference on Advances in Neural Information Processing Systems (NIPS), pp. 1828-1836, 2015.

Sumit Gulwani. Automating string processing in spreadsheets using input-output examples. In ACM SIGPLAN Notices, volume 46, pp. 317-330. ACM, 2011.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Proceedings of the 28th Conference on Advances in Neural Information Processing Systems (NIPS), pp. 190-198, 2015.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.

Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of the 4th International Conference on Learning Representations (ICLR), 2016a.

Sebastian Riedel, Matko Bosnjak, and Tim Rocktaschel. Programming with a differentiable forth interpreter. CoRR, abs/1605.06640, 2016. URL http://arxiv.org/abs/1605.06640.

Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. In Proceedings of the 33nd International Conference on Machine Learning (ICML), pp. 421-429, 2016.

Name           Description
len            Return the length of a list.
rev            Reverse a list.
sum            Sum all elements of a list.
allGtK         Check if all elements of a list are greater than k.
exGtK          Check if at least one element of a list is greater than k.
findLastIdx    Find the index of the last list element which is equal to v.
getIdx         Return the k-th element of a list.
last2          Return the 2nd-to-last element of a list.
mapAddK        Compute the list in which k has been added to each element of the input list.
mapInc         Compute the list in which each element of the input list has been incremented.
max            Return the maximum element of a list.
pairwiseSum    Compute the list where each element is the sum of the corresponding elements of two input lists.
revMapInc      Reverse a list and increment each element.

Figure 2: Our example tasks for loop-based programs. "Simple" tasks are above the line.

function MAPI(list, func)
    idx <- 0
    ret <- []
    for ele in list do
        ret <- append(ret, func(ele, idx))
        idx <- idx + 1
    return ret

Figure 3: Semantics of foldli, mapi, zipwithi in a Python-like language.

We show example results of our training in Figs. 4-16. Note that these are the actual results produced by our system, and have only been slightly edited for typesetting. Finally, we have colored statements that a simple program analysis can identify as not contributing to the result in gray.

[Figures 4-16: learned program listings.]

Figure 4: A solution to allGtK in the C model. Code in gray is dead.
Figure 5: Solutions to exGtK in the C+T+I and A+L models.
Figure 6: A solution to findLastIdx in the C+T+I model.
Figure 7: A solution to getIdx in the C+T+I model.
Figure 8: Solutions to last2 in the C+T and A+L models.
Figure 9: A solution to len in the C+T+I model.
Figure 10: A solution to mapAddK in the C+T+I model.
Figure 11: A solution to mapInc in the C+T+I model.
Figure 12: Solutions to max in the C+T+I and A+L models.
Figure 13: A solution to pairwiseSum in the C+T+I model.
Figure 14: Solutions to rev in the C+T+I and A+L models.
Figure 15: Solutions to revMapInc in the C+T and A+L models.
Figure 16: Solutions to sum in the C+T+I and A+L models.

}]
HJGwcKclx | [{"section_index": "0", "section_name": "SOFT WEIGHT-SHARING FOR NEURAL NETWORK COMPRESSION", "section_text": "Karen Ullrich
University of Amsterdam

The success of deep learning in numerous application domains has created the desire to run and train neural networks on mobile devices. This, however, conflicts with their computationally, memory- and energy-intensive nature, leading to a growing interest in compression. Recent work by Han et al. (2015a) proposes a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of "soft weight-sharing" (Nowlan & Hinton, 1992). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text":

"Bigger is better" is the ruling maxim in deep learning land. Deep neural nets with billions of parameters are no longer an exception. Networks of such size are unfortunately not practical for mobile, on-device applications, which face strong limitations with respect to memory and energy consumption. Compressing neural networks could not only improve memory and energy consumption, but also lead to less network bandwidth, faster processing and better privacy. It has been shown that large networks are heavily over-parametrized and can be compressed by approximately two orders of magnitude without significant loss of accuracy. Apparently, over-parametrization is beneficial for optimization, but not necessary for accurate prediction. This observation has opened the door for a number of highly successful compression algorithms, which either train the network from scratch (Hinton et al., 2015; Iandola et al., 2016; Courbariaux & Bengio, 2016; Courbariaux et al., 2016) or apply compression post-optimization (Han et al., 2015b;a; Guo et al., 2016; Chen et al., 2015; Wen et al., 2016).

It has long been known that compression is directly related to (variational) Bayesian inference and the minimum description length (MDL) principle (Hinton & Van Camp, 1993). One can show that good compression can be achieved by encoding the parameters of a model using a good prior and specifying the parameters up to an uncertainty given, optimally, by the posterior distribution. An ingenious bits-back argument can then be used to get a refund for using these noisy weights. A number of papers have appeared that encode the weights of a neural network with limited precision (say 8 bits per weight), effectively cashing in on this "bits-back" argument (Gupta et al., 2015; Courbariaux et al., 2014; Venkatesh et al., 2016). Some authors go so far as to argue that even a single bit per weight can be used without much loss of accuracy (Courbariaux et al., 2015; Courbariaux & Bengio, 2016).

In this work we follow a different but related direction, namely to learn the prior that we use to encode the parameters. In Bayesian statistics this is known as empirical Bayes. To encourage compression of the weights to K clusters, we fit a mixture-of-Gaussians prior model over the weights. This idea originates from the nineties, known as soft weight-sharing (Nowlan & Hinton, 1992), where it was used to regularize a neural network. Here our primary goal is network compression, but as was shown in Hinton & Van Camp (1993), these two objectives are almost perfectly aligned.
Here our primary goal is network compression, but as was shown in Hinton & Van Camp (1993), these two objectives are almost perfectly aligned. By fitting the mixture components alongside the weights, the weights tend to concentrate very tightly around a number of cluster components, while the cluster centers optimize themselves to give the network high predictive accuracy. Compression is achieved because we only need to encode K cluster means (in full precision) in addition to the assignment of each weight to one of these J values (using log(J) bits per weight). We find that competitive compression rates can be achieved by this simple idea.

2 MDL VIEW ON VARIATIONAL LEARNING

Model compression was first discussed in the context of information theory. The minimum description length (MDL) principle identifies the best hypothesis to be the one that best compresses the data. More specifically, it minimizes the cost to describe the model (complexity cost L_C) and the misfit between model and data (error cost L_E) (Rissanen, 1978; 1986). It has been shown that variational learning can be reinterpreted as an MDL problem (Wallace, 1990; Hinton & Van Camp, 1993; Honkela & Valpola, 2004; Graves, 2011). In particular, given data D = {X = {x_n}_{n=1}^N, T = {t_n}_{n=1}^N}, a set of parameters w = {w_i}_{i=1}^I that describes the model, and an approximation q(w) of the posterior p(w|D), the variational lower bound, also known as negative variational free energy, L(q(w), w) can be decomposed in terms of error and complexity losses:

    L(q(w), w) = E_q(w)[-log p(D|w)] + KL(q(w) || p(w)) = L_E + L_C

where p(w) is the prior over w and p(D|w) is the model likelihood. According to Shannon's source coding theorem, L_E lower bounds the expected amount of information needed to communicate the targets T, given the receiver knows the inputs X and the model w. The functional form of the likelihood term is conditioned by the target distribution. For example, in the case of regression the predictions of the model are assumed to be normally distributed around the targets T:

    p(D|w) = p(T|X, w) = ∏_{n=1}^N N(t_n | x_n, w)

where N(t_n | x_n, w) is a normal distribution. Another typical example is classification, where the conditional distribution of targets given data is assumed to be Bernoulli distributed (for a more detailed discussion see Bishop (2006)). These assumptions eventually lead to the well known error functions, namely cross-entropy error and squared error for classification and regression, respectively.

Before we can communicate the data, however, we first seek to communicate the model. Similarly to L_E, L_C is a lower bound for transmitting the model. More specifically, if sender and receiver agree on a prior, L_C is the expected cost of communicating the parameters w. This cost is again twofold:

    KL(q(w) || p(w)) = -E_q(w)[log p(w)] - H(q(w))

where H(·) denotes the entropy. In Wallace (1990) and Hinton & Van Camp (1993) it was shown that noisy encoding of the weights can be beneficial due to the bits-back argument if the uncertainty does not harm the error loss too much. The number of bits to get refunded by an uncertain weight distribution q(w) is given by its entropy. Further, it can be shown that the optimal distribution for q(w) is the Bayesian posterior distribution. While bits-back is proven to be an optimal coding scheme (Honkela & Valpola, 2004), it is often not practical in real world settings.
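To make the decomposition of L_C above concrete, here is a small numerical check for a univariate Gaussian posterior and prior. The values are illustrative only and are not taken from the paper:

```python
import numpy as np

# Illustrative check of KL(q || p) = -E_q[log p(w)] - H(q) for Gaussians.
mu_q, sigma_q = 0.3, 0.05   # hypothetical posterior over one weight
mu_p, sigma_p = 0.0, 1.0    # hypothetical prior

# Closed-form KL between two univariate Gaussians (in nats).
kl = np.log(sigma_p / sigma_q) + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2) - 0.5

# Cross-entropy -E_q[log p] and entropy H(q).
cross_entropy = 0.5 * np.log(2 * np.pi * sigma_p**2) + (sigma_q**2 + (mu_q - mu_p)**2) / (2 * sigma_p**2)
entropy = 0.5 * np.log(2 * np.pi * np.e * sigma_q**2)

assert np.isclose(kl, cross_entropy - entropy)  # matches the decomposition above
```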
A practical way to cash in on noisy weights (or bits-back) is to only encode a weight value up to a limited number of bits. To see this, assume a factorized variational posterior q(w) = ∏_i q(w_i). Each posterior q(w_i) is associated with a Dirac distribution up to machine precision, for example, a Gaussian distribution with variance σ, for small values of σ. This implies that we formally incur a very small refund per weight:

    H(q(w)) = -∫ q(w) log q(w) dw = -∫ N(w | ŵ, σ²) log N(w | ŵ, σ²) dw = (1/2) log(2πeσ²)

Note that the coarser the quantization of weights, the more compressible the model. The bits-back scheme makes three assumptions: (i) weights are being transmitted independently, (ii) weights are independent of each other (no mutual information), and (iii) the receiver knows the prior. Han et al. (2015a) show that one can successfully exploit (i) and (ii) by using a form of arithmetic coding (Witten et al., 1987). In particular, they employ range coding schemes such as the Sparse Matrix Format (discussed in Appendix A). This is beneficial because the weight distribution has low entropy. Note that the cost of transmitting the prior should be negligible. Thus a factorized prior with different parameters for each factor is not desirable.

The main objective of this work is to find a suitable prior for optimizing the cross-entropy between a delta posterior q(w) and the prior p(w), while at the same time keeping a practical coding scheme in mind. Recall that the cross-entropy is a lower bound on the average number of bits required to encode the weights of the neural network (given infinite precision). Following Nowlan & Hinton (1992), we model the prior p(w) as a mixture of Gaussians:

    p(w) = ∏_{i=1}^I Σ_{j=0}^J π_j N(w_i | μ_j, σ_j²)

We learn the mixture parameters μ_j, σ_j, π_j via maximum likelihood simultaneously with the network weights. This is equivalent to an empirical Bayes approach in Bayesian statistics. For state-of-the-art compression schemes pruning plays a major role. By enforcing an arbitrary "zero" component to have a fixed μ_{j=0} = 0 location and π_{j=0} close to 1, a desired weight pruning rate can be enforced. In this scenario π_{j=0} may be fixed or trainable. In the latter case a Beta distribution as hyper-prior might be helpful. The approach naturally encourages quantization because, in order to optimize the cross-entropy, the weights will cluster tightly around the cluster means, while the cluster means themselves move to some optimal location driven by L_E. The effect might even be so strong that it is beneficial to have a Gamma hyper-prior on the variances of the mixture components to prevent the components from collapsing. Furthermore, note that mixture components merge when there is not enough pressure from the error loss to keep them separated: weights are attracted by means and means are attracted by weights, hence means also attract each other. In that way the network learns how many quantization intervals are necessary. We demonstrate that behaviour in Figure 3.
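As a concrete illustration of this mixture prior, here is a minimal NumPy sketch of the complexity loss -log p(w) for a delta posterior. The variable names and toy values are ours, not the authors' code:

```python
import numpy as np

def neg_log_mixture_prior(w, pi, mu, sigma):
    """Complexity loss -log p(w) for a Gaussian mixture prior.

    w:  flat array of I network weights
    pi, mu, sigma: arrays of J+1 mixing proportions, means, std devs
    (component j=0 plays the role of the fixed 'zero' component).
    """
    d = w[:, None] - mu[None, :]                       # shape (I, J+1)
    log_comp = (np.log(pi)[None, :]
                - 0.5 * np.log(2 * np.pi * sigma**2)[None, :]
                - 0.5 * d**2 / sigma[None, :]**2)
    # log-sum-exp over components, summed over weights.
    m = log_comp.max(axis=1, keepdims=True)
    log_p = m[:, 0] + np.log(np.exp(log_comp - m).sum(axis=1))
    return -log_p.sum()

# Toy usage: 17 components = zero component + 16 trainable ones.
w = np.random.randn(1000) * 0.1
pi = np.full(17, (1 - 0.999) / 16); pi[0] = 0.999
mu = np.concatenate(([0.0], np.linspace(-0.2, 0.2, 16)))
sigma = np.full(17, 0.05)
loss_c = neg_log_mixture_prior(w, pi, mu, sigma)
```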
3 RELATED WORK

Reducing the bit size per stored weight is another approach to model compression. For example, reducing 32 bit floats to 1 bit leads to a 32x storage improvement. Gong et al. (2014) proposed and experimented with a number of quantization approaches: binary quantization, k-means quantization, product quantization and residual quantization. Other work finds optimal fixed points (Lin et al., 2015), applies hashing (Chen et al., 2015) or minimizes the estimation error (Wu et al., 2015). Merolla et al. (2016) demonstrate that neural networks are robust against certain amounts of low precision; indeed several groups have exploited this and showed that decreasing the weight encoding precision has little to no effect on the accuracy loss (Gupta et al., 2015; Courbariaux et al., 2014; Venkatesh et al., 2016). Pushing the idea of extreme quantization, Courbariaux et al. (2015) and Courbariaux & Bengio (2016) trained networks from scratch that use only 1-bit weights with floating point gradients; to achieve competitive results, however, they require many more of these weights.

There has been a recent surge in interest in compression in the deep neural network community. Denil et al. (2013) showed that by predicting parameters of neural networks there is great redundancy in the amount of parameters being used. This suggests that pruning, originally introduced to reduce structure in neural networks and hence improve generalization, can be applied to the problem of compression and speed-up (LeCun et al., 1989). In fact, Han et al. (2015b) and Guo et al. (2016) show that neural networks survive severe weight pruning (up to 99%) without significant loss of accuracy. A variational version is proposed by Molchanov et al. (2017): the authors learn the dropout rate for each weight in the network separately, and some parameters are effectively pruned when the dropout rate is very high. In an approach slightly orthogonal to weight pruning, Wen et al. (2016) applied structural regularization to prune entire sets of weights from the neural network. Such extreme weight pruning can lead to entire structures being obsolete, which for the case of convolutional filters can greatly speed up prediction. Most important for compression, however, is that in conjunction with the Compressed Sparse Column (CSC) format, weight pruning is a highly effective way to store and transfer weights. In Appendix A we discuss this format in more detail.

Han et al. (2015a) elaborate on combining these ideas. They introduce a multi-step algorithm that compresses CNNs up to 49x. First, weights are pruned (giving 9-13x compression); second, they quantize the weights (increasing compression to 27-31x); and last, they apply Huffman encoding (giving a final compression of 35-49x). The quantization step is trainable in that, after each weight is assigned to a cluster centroid, the centroids get trained with respect to the original loss function. Note that this approach has several restrictions: the number of weights set to zero is fixed after the pruning step, as is the assignment of a weight to a given cluster in the second step. Our approach overcomes all these restrictions.

A final approach to compressing information is to apply low rank matrix decomposition. It was first introduced by Denton et al. (2014) and Jaderberg et al. (2014), and elaborated on by using low rank filters (Ioannou et al., 2015), low rank regularization (Tai et al., 2015) or combining low rank decomposition with sparsity (Liu et al., 2015).

4 METHOD

This section presents the procedure of network compression as applied in the experiment section. A summary can be found in Algorithm 1.

We retrain pre-trained neural networks with soft weight-sharing and factorized Dirac posteriors. Hence we optimize

    L(w, {μ_j, σ_j, π_j}_{j=0}^J) = L_E + τ L_C = -log p(T|X, w) - τ log p(w, {μ_j, σ_j, π_j}_{j=0}^J)

via gradient descent, specifically using Adam (Kingma & Ba, 2014). The KL divergence reduces to the log-prior term because the entropy of a delta posterior does not depend on any trainable parameters.
Note that, similar to Nowlan & Hinton (1992), we weigh the log-prior contribution to the gradient by a factor of τ = 0.005. In the process of retraining the weights, the variances, means, and mixing proportions of all but one component are learned. For one component, we fix μ_{j=0} = 0 and π_{j=0} = 0.999. Alternatively we can train π_{j=0} as well but restrict it by a Beta distribution hyper-prior. Our Gaussian mixture prior is initialized with 2^4 + 1 = 17 components. We initialize the learning rates for the weights and for the means, log-variances and log-mixing proportions separately. The weights should be trained with approximately the same learning rate used for pre-training. The remaining learning rates are set to 5·10^-4. Note that this is a very sensitive parameter. The Gaussian mixture will collapse very fast as long as the error loss does not object. However, if it collapses too fast, weights might be left behind, thus it is important to set the learning rate such that the mixture does not collapse too soon. If the learning rate is too small, the mixture will converge too slowly. Another option to keep the mixture components from collapsing is to apply an Inverse-Gamma hyper-prior on the mixture variances.

4.2 INITIALIZATION OF MIXTURE MODEL COMPONENTS

In principle, we follow the method proposed by Nowlan & Hinton (1992). We distribute the means of the 16 non-fixed components evenly over the range of the pre-trained weights. The variances are initialized such that each Gaussian has significant probability mass in its region. A good orientation for setting the initial variance is the weight decay rate with which the original network was trained. The trainable mixing proportions are initialized evenly, π_j = (1 - π_{j=0})/J. We also experimented with other approaches, such as distributing the means such that each component assumes an equal amount of probability. We did not observe any significant improvement over the simple initialization procedure.

4.3 POST-PROCESSING

After re-training we set each weight to the mean of the component that takes most responsibility for it, i.e., we quantize the weights. Before quantizing, however, there might be redundant components as explained in Section 2. To eliminate those we follow Adhikari & Hollmén (2012) by computing the KL divergence between all components. For a KL divergence smaller than a threshold, we merge two components as follows:

    π_new = π_i + π_j,   μ_new = (π_i μ_i + π_j μ_j) / (π_i + π_j),   σ²_new = (π_i σ_i² + π_j σ_j²) / (π_i + π_j)

Algorithm 1: Soft weight-sharing for compression, our proposed algorithm for neural network model compression. It is divided into two main steps: network re-training and post-processing.
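For concreteness, the KL-based merging step in the post-processing above can be sketched in a few lines of NumPy. The threshold value below is our illustrative choice, not a value taken from the paper:

```python
import numpy as np

def kl_gauss(mu1, s1, mu2, s2):
    # KL( N(mu1, s1^2) || N(mu2, s2^2) ) in nats.
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def merge_pair(pi, mu, sigma, i, j):
    # Moment-matched merge of components i and j (the formulas above).
    p = pi[i] + pi[j]
    m = (pi[i] * mu[i] + pi[j] * mu[j]) / p
    v = (pi[i] * sigma[i]**2 + pi[j] * sigma[j]**2) / p
    return p, m, np.sqrt(v)

# Example: merge two nearly identical components (threshold is illustrative).
pi = np.array([0.6, 0.2, 0.2])
mu = np.array([0.0, 0.100, 0.105])
sigma = np.array([0.5, 0.05, 0.05])
if kl_gauss(mu[1], sigma[1], mu[2], sigma[2]) < 1e-2:
    p_new, m_new, s_new = merge_pair(pi, mu, sigma, 1, 2)
```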
5 MODELS

We test our compression procedure on three neural network models, two of which are used in previous work we compare against in our experiments:

(a) LeNet-300-100, an MNIST model described in LeCun et al. (1998). As no pre-trained model is available, we train our own, resulting in an error rate of 1.89%.
(b) LeNet-5-Caffe, a modified version of the LeNet-5 MNIST model in LeCun et al. (1998). The model specification can be downloaded from the Caffe MNIST tutorial page.¹ As no pre-trained model is available, we train our own, resulting in an error rate of 0.88%.
(c) ResNets have been invented by He et al. (2015) and further developed by He et al. (2016) and Zagoruyko & Komodakis (2016). We choose a model version of the latter authors. In accordance with their notation, we choose a network with depth 16, width k = 4 and no dropout. This model has 2.7M parameters. In our experiments, we follow the authors by using only light augmentation, i.e., horizontal flips and random shifts by up to 4 pixels. Furthermore the data is normalized. The authors report error rates of 5.02% and 24.03% for CIFAR-10 and CIFAR-100 respectively. By reimplementing their model we trained models that achieve errors of 6.48% and 28.23%.

¹ https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt

6 EXPERIMENTS

First, we run our algorithm without any hyper-priors, an experiment on LeNet-300-100. In Figure 1 we visualise the original distribution over weights, the final distribution over weights, and how each weight changed its position in the training process. After retraining, the distribution is sharply peaked around zero. Note that with our procedure the optimization process automatically determines how many weights per layer are pruned. Specifically in this experiment, 96% of the first layer (235K parameters), 90% of the second (30K) and only 18% of the final layer (10K) are pruned. From observations of this and other experiments, we conclude that the amount of pruned weights depends mainly on the number of parameters in the layer rather than its position or type (convolutional or fully connected).

Evaluating the model reveals a compression rate of 64.2. The accuracy of the model does not drop significantly, from 0.9811 to 0.9806. However, we do observe that the mixture components eventually collapse, i.e., the variances go to zero. This makes the prior inflexible and the optimization can easily get stuck because the prior is accumulating probability mass around the mixture means. For a weight, escaping from those high probability plateaus is impossible. This motivates the use of hyper-priors such as an Inverse-Gamma prior on the variances to essentially lower bound them.

Figure 1: On top we show the distribution of a pretrained network; on the right, the same distribution after retraining. The change in value of each weight is illustrated by a scatter plot (initial weight vs. final weight).

The proposed procedure offers various freedoms: there are many hyper-parameters to optimize, one may use hyper-priors as motivated in the previous section, or even go as far as using other distributions as mixture components.

To cope with the variety of choices, we optimize 13 hyper-parameters using the Bayesian optimization tool Spearmint (Snoek et al., 2012). These include the learning rates of the weight and mixing components, the number of components, and τ. Furthermore, we assume an Inverse-Gamma prior over the variances, separately for the zero component and the other components, and a Beta prior over the zero mixing component.

In these experiments, we optimize re-training hyper-parameters for LeNet-300-100 and LeNet-5-Caffe. Due to computational restrictions, we set the number of training epochs to 40 (previously 100), knowing that this may lead to solutions that have not fully converged. Spearmint acts on an objective that balances accuracy loss vs. compression rate. The accuracy loss in this case is measured over the training data. The results are shown in Figure 2.
In the illustration we use the accuracy loss as given by the test data. The best results predicted by our Spearmint objective are colored in dark blue. Note that we achieve competitive results in this experiment despite the restricted optimization time of 40 epochs, i.e., 18K updates.

Figure 2: We show the results of optimizing hyper-parameters with Spearmint. Specifically, we plot the accuracy loss of a re-trained network against the compression rate. Each point represents one hyper-parameter setting. The guesses of the optimizer improve over time. We also present the results of other methods for comparison (Han et al. (2016), Guo et al. (2016), Ours; x-axis: accuracy loss [%]). Left: LeNet-300-100. Right: LeNet-5-Caffe.

Figure 3: Illustration of our mixture model compression procedure on LeNet-5-Caffe. Left: Dynamics of Gaussian mixture components during the learning procedure (x-axis: epoch). Initially there are 17 components, including the zero component. During learning components are absorbed into other components, resulting in roughly 6 significant components. Right: A scatter plot of initial versus final weights, along with the Gaussian components' uncertainties. The initial weight distribution is roughly one broad Gaussian, whereas the final weight distribution matches closely the final, learned prior, which has become very peaked, resulting in good quantization properties.

The conclusions from this experiment are a bit unclear: on the one hand we do achieve state-of-the-art results for LeNet-5-Caffe; on the other hand there seems to be little connection between the parameter settings of the best results. One wonders if a 13-dimensional parameter space can be searched efficiently with the amount of runs we were conducting. It may be more reasonable to gain more insight into the optimization process and tune parameters accordingly.

We compare our compression scheme with Han et al. (2015a) and Guo et al. (2016) in Table 1. The results on MNIST networks are very promising. We achieve state-of-the-art compression rates in both examples. We can furthermore show results for a light version of ResNet with 2.7M parameters to illustrate that our method does scale to modern architectures. We used more components (64) here to cover the large regime of weights. However, for large networks such as VGG with 138M parameters the algorithm is too slow to get usable results. We propose a solution for this problem in Appendix C; however, we do not have any experimental results yet.

Table 1: Compression results. We compare methods based on the post-processing error (we also indicate the starting error), the accuracy loss Δ, the number of non-zero weights |W≠0| and the final compression rate CR based on the method proposed by Han et al. (2015a).

Model           Method              Top-1 Error [%]   Δ [%]   |W| [10^6]   |W≠0|/|W| [%]   CR
LeNet-300-100   Han et al. (2015a)  1.64 → 1.58       -0.06   0.2          8.0             40
                Guo et al. (2016)   2.28 → 1.99       -0.29                1.8             56
                Ours                1.89 → 1.94        0.05                4.3             64
LeNet-5-Caffe   Han et al. (2015a)  0.80 → 0.74       -0.06   0.4          8.0             39
                Guo et al. (2016)   0.91 → 0.91        0.00                0.9             108
                Ours                0.88 → 0.97        0.09                0.5             162
ResNet (light)  Ours                6.48 → 8.50        2.02   2.7          6.6             45

DISCUSSION AND FUTURE WORK

In this work we revived a simple and principled regularization method based on soft weight-sharing and applied it directly to the problem of model compression. On the one hand we showed that we can optimize the MDL complexity lower bound, while on the other hand we showed that our method works well in practice when being applied to different models.
A shortcoming of the method at the moment is its computational cost and the ease of implementation. For the first, we provide a proposal that will be tested in future work. The latter is an open question at the moment. Note that our method, since it is optimizing the lower bound directly, will most likely also work when applied to other storage formats, such as those proposed originally by Hinton & Van Camp (1993). In the future we would like to extend beyond Dirac posteriors, as done in Graves (2011), by extending the weight-sharing prior to more general priors. For example, from a compression point of view, we could learn to prune entire structures from the network by placing Bernoulli priors over structures such as convolutional filters or ResNet units. Furthermore, it could be interesting to train models from scratch or in a student-teacher setting.

https://github.com/KarenUllrich/Tutorial-SoftWeightSharingForNNCompression

ACKNOWLEDGEMENTS

This research has been supported by Google.

REFERENCES

Prem Raj Adhikari and Jaakko Hollmén. Multiresolution mixture modeling using merging of mixture components. 2012.

Christopher M Bishop. Pattern Recognition and Machine Learning. 2006.

Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing convolutional neural networks. arXiv preprint arXiv:1506.04449, 2015.

Matthieu Courbariaux and Yoshua Bengio. Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.

Matthieu Courbariaux, Jean-Pierre David, and Yoshua Bengio. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.

Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123-3131, 2015.

Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pp. 2148-2156, 2013.

Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pp. 1269-1277, 2014.

Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.

Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In Advances in Neural Information Processing Systems, pp. 1379-1387, 2016.

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 2015.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2015a.

Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 2, pp. 598-605, 1989.

Paul Merolla, Rathinakumar Appuswamy, John Arthur, Steve K Esser, and Dharmendra Modha. Deep neural networks are robust to weight binarization and other non-linear distortions. arXiv preprint arXiv:1606.01981, 2016.

Steven J Nowlan and Geoffrey E Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4(4):473-493, 1992.

Jorma Rissanen. Modeling by shortest data description. Automatica, 14(5):465-471, 1978.

Jorma Rissanen. Stochastic complexity and modeling. The Annals of Statistics, pp. 1080-1100, 1986.

Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pp. 400-407, 1951.

Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951-2959, 2012.

Ganesh Venkatesh, Eriko Nurvitadhi, and Debbie Marr. Accelerating deep convolutional networks using low-precision and sparsity. arXiv preprint arXiv:1610.00324, 2016.

Ian H Witten, Radford M Neal, and John G Cleary. Arithmetic coding for data compression. Communications of the ACM, 30(6):520-540, 1987.

Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng. Quantized convolutional neural networks for mobile devices. arXiv preprint arXiv:1512.06473, 2015.

A REVIEW OF STATE-OF-THE-ART NEURAL NETWORK COMPRESSION

We apply the compression scheme proposed by Han et al. (2015b;a) that highly optimizes the storage utilized by the weights. First of all, the authors store the weights in regular compressed sparse-row (CSR) format. Instead of storing |W(l)| parameters with a bit length of (commonly) p_orig = 32 bit, the CSR format stores three vectors (A, IR, IC):

- A stores all non-zero entries. It is thus of size |W(l)|_0 × p_orig, where |W(l)|_0 is the number of non-zero entries in W(l).
- IR is defined recursively: IR_0 = 0, IR_k = IR_{k-1} + (number of non-zero entries in the (k-1)-th row of W(l)). It has K + 1 entries, each of size p_orig.
- IC contains the column index in W(l) of each element of A. Its size is hence |W(l)|_0 × p_orig.

An example shall illustrate the format. Let

    W = ( 0 0 0 1
          0 2 0 0
          0 0 0 0
          2 5 0 0
          0 0 0 1 )

then

    A = [1, 2, 2, 5, 1],   IR = [0, 1, 2, 2, 4, 5],   IC = [3, 1, 0, 1, 3]

The compression rate achieved by applying the CSR format naively is

    r = |W(l)| p_orig / ((2|W(l)|_0 + K + 1) p_orig)

However, this result can be significantly improved by optimizing each of the three arrays.
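A small Python sketch of how the three arrays are built (illustrative, not the authors' code):

```python
import numpy as np

def to_csr(W):
    """Build the (A, IR, IC) arrays of the CSR format described above."""
    A, IR, IC = [], [0], []
    for row in W:
        for col, val in enumerate(row):
            if val != 0:
                A.append(val)     # non-zero entry
                IC.append(col)    # its column index
        IR.append(len(A))         # running count of non-zeros per row
    return A, IR, IC

# The example matrix from above.
W = np.array([[0, 0, 0, 1],
              [0, 2, 0, 0],
              [0, 0, 0, 0],
              [2, 5, 0, 0],
              [0, 0, 0, 1]])
A, IR, IC = to_csr(W)
assert A == [1, 2, 2, 5, 1] and IR == [0, 1, 2, 2, 4, 5] and IC == [3, 1, 0, 1, 3]
```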
A.2 STORING THE INDEX ARRAY IR

To optimize IR, note that the biggest number in IR is |W(l)|_0. This number will be much smaller than 2^{p_orig}. Thus one could try to find p_prun ∈ Z+ such that |W(l)|_0 < 2^{p_prun}. A codebook would not be necessary. Thus instead of storing (K + 1) values with p_orig, we store them with p_prun depth.

Instead of storing the indexes, we store the differences between indexes. Thus there is a smaller range of values being used. We further shrink the range of utilized values by filling A with zeros whenever the distance between two non-zero weights exceeds the span of 2^{p_prun}. Han et al. (2015a) propose p_prun = 5 for fully connected layers and p_prun = 8 for convolutional layers. An illustration of the process is shown in Fig. 4. Furthermore, the indexes will be compressed with Huffman encoding.

Figure 4: Illustration of the process described in A.2. IC is represented by relative indexes (diff). If a relative index is larger than 8 (= 2^3), A will be filled with an additional zero. Figure from Han et al. (2015a).

A.3 STORING THE WEIGHT ARRAY A

In order to minimize the storage occupied by A, we quantize the values of A, storing indexes in A and a consecutive codebook. Indexing can be improved further by again applying Huffman encoding.

B.1 GAMMA DISTRIBUTION

The Gamma distribution is the conjugate prior for the precision of a univariate Gaussian distribution. It is defined for positive random variables λ > 0:

    Γ(λ | α, β) = (β^α / Γ(α)) λ^{α-1} e^{-βλ}

Figure 5: Gamma distribution with λ* = 100. α and β correspond to different choices for the variance of the distribution (shown: (α, β) = (6, 0.05), (11, 0.1), (101, 1), (1001, 10)).

In our experiments we set the desired variance of the mixture components to 0.05. This corresponds to λ* = 1/(0.05)² = 400. We show the effect of different choices for the variance of the Gamma distribution in Figure 5.

B.2 BETA DISTRIBUTION

The Beta distribution is the conjugate prior for the Bernoulli distribution, thus it is often used to represent the probability for a binary event. It is defined for some random variable π_{j=0} ∈ [0, 1]:

    B(π_{j=0} | α, β) = (Γ(α + β) / (Γ(α)Γ(β))) π_{j=0}^{α-1} (1 - π_{j=0})^{β-1}

with α, β > 0. α and β can be interpreted as the effective number of observations prior to an experiment of π_{j=0} = 1 and π_{j=0} = 0, respectively. In the literature, α + β is defined as the pseudo-count. The higher the pseudo-count, the stronger the prior. In Figure 6 we show the Beta distribution for different pseudo-counts.

Figure 6: Beta distribution with π*_{j=0} = 0.9. α and β correspond to different choices for the pseudo-count (shown: (α, β) = (8.92, 1.08), (48.52, 1.48), (98.02, 1.98), (494.02, 5.98)).

Neural networks are usually trained with a form of batch gradient descent (GD) algorithm. These methods fall under the umbrella of stochastic optimization (Robbins & Monro, 1951). Here the model parameters W are updated iteratively. At each iteration t, a set of B data instances is used to compute a noisy approximation of the posterior derivative with respect to W given all N data instances:

    ∇_W log p(W|D) ≈ (N/B) Σ_{n=1}^B ∇_W log p(t_n | x_n, W) + Σ_{i=1}^I ∇_W log p(w_i)

This gradient approximation can subsequently be used in various update schemes such as simple GD.

For large models, estimating the prior gradient can be an expensive operation. This is why we propose to apply similar measures for the gradient estimation of the prior as we did for the likelihood term. To do so, we sample K weights randomly. The noisy approximation of the posterior derivative is then

    ∇_W log p(W|D) ≈ (N/B) Σ_{n=1}^B ∇_W log p(t_n | x_n, W) + (I/K) Σ_{k=1}^K ∇_W log p(w_{i_k})
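A sketch of this subsampled estimator for the mixture prior (our own illustration: the gradient of the log mixture prior on a random subset of weights, rescaled by I/K):

```python
import numpy as np

def prior_grad_subsampled(w, pi, mu, sigma, K, rng=np.random):
    """Unbiased estimate of sum_i d/dw_i log p(w_i) using K sampled weights."""
    I = w.size
    idx = rng.choice(I, size=K, replace=False)
    ws = w[idx]
    # Gaussian mixture responsibilities gamma_{ij} for the sampled weights.
    d = ws[:, None] - mu[None, :]
    log_comp = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma**2))[None, :] \
               - 0.5 * d**2 / sigma[None, :]**2
    log_comp -= log_comp.max(axis=1, keepdims=True)
    gamma = np.exp(log_comp)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # d/dw log p(w) = sum_j gamma_j * (mu_j - w) / sigma_j^2, rescaled by I/K.
    grad = np.zeros_like(w)
    grad[idx] = (I / K) * (gamma * (-d) / sigma[None, :]**2).sum(axis=1)
    return grad
```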
In FigureD|we show the pre-trained and compressed filters for the first and second layers of LeNet-300-100.\nFigure 7: Convolution filters from LeNet-5-Caffe. Left: Pre-trained filters. Right: Compresse filters. The top filters are the 20 first layer convolution weights; the bottom filters are the 20 by 5 convolution weights of the second layer.\n-\nFigure 8: Feature filters for LeNet-300-100. Left: Pre-trained filters. Right: Compressed filters\n. - 7. 1 : :. T f .. : . 1 ...."}] |
HJPmdP9le | [{"section_index": "0", "section_name": "EFFICIENT SUMMARIZATION WITH READ-AGAIN AND COPY MECHANISM", "section_text": "Wenyuan Zeng, Wenjie Luo, Sanja Fidler, Raquel Urtasun

1 INTRODUCTION

Encoder-decoder models have been widely used in sequence to sequence tasks such as machine translation (Cho et al. (2014); Sutskever et al. (2014)). They consist of an encoder which represents the whole input sequence with a single feature vector. The decoder then takes this representation and generates the desired output sequence. The most successful models are LSTM and GRU, as they are much easier to train than vanilla RNNs.

In this paper we are interested in summarization, where the input sequence is a sentence/paragraph and the output is a summary of the text. Several encoding-decoding approaches have been proposed (Rush et al. (2015); Hu et al. (2015); Chopra et al. (2016)). Despite their success, it is commonly believed that the intermediate feature vectors are limited, as they are created by only looking at previous words. This is particularly detrimental when dealing with large input sequences. Bi-directional RNNs (Schuster & Paliwal (1997); Bahdanau et al. (2014)) try to address this problem by computing two different representations resulting from reading the input sequence left-to-right and right-to-left. The final vectors are computed by concatenating the two representations. However, the word representations are still computed with limited scope.

The decoder employed in all these methods outputs at each time step a distribution over a fixed vocabulary. In practice, this introduces problems with rare words (e.g., proper nouns) which are out of vocabulary. To alleviate this problem, one could potentially increase the size of the decoder vocabulary, but decoding becomes computationally much harder, as one has to compute the soft-max over all possible words. Gulcehre et al. (2016), Nallapati et al. (2016) and Gu et al. (2016) proposed to use a copy mechanism that dynamically copies words from the input sequence while decoding. However, they lack the ability to extract proper embeddings of out-of-vocabulary words from the input context. Bahdanau et al. (2014) proposed to use an attention mechanism to emphasize specific parts of the input sentence when generating each word. However, the encoder problem still remains in this approach.

In this work, we propose two simple mechanisms to deal with both encoder and decoder problems. We borrowed intuition from human readers, who read the text multiple times before generating summaries. We thus propose a 'Read-Again' model that first reads the input sequence before committing to a representation of each word. The first read representation then biases the second read representation, and thus allows the intermediate hidden vectors to capture the meaning appropriate for the input text. We show that this idea can be applied to both LSTM and GRU models. Our second contribution is a copy mechanism which allows us to use much smaller vocabulary sizes, resulting in much faster decoding and much smaller storage space. Our copy mechanism also allows us to construct a better representation of out-of-vocabulary words. We demonstrate the effectiveness of our approach on the challenging Gigaword dataset and DUC competition, showing state-of-the-art performance.

ABSTRACT

Encoder-decoder models have been widely used to solve sequence to sequence prediction tasks. However, current approaches suffer from two shortcomings. First, the encoders compute a representation of each word taking into account only the history of the words it has read so far, yielding suboptimal representations. Second, current models utilize large vocabularies in order to minimize the problem of unknown words, resulting in slow decoding times and large storage costs. In this paper we address both shortcomings. Towards this goal, we first introduce a simple mechanism that first reads the input sequence before committing to a representation of each word.
Furthermore, we propose a simple copy mechanism that is able to exploit very small vocabularies and handle out-of-vocabulary words. We demonstrate the effectiveness of our approach on the Gigaword dataset and DUC competition, outperforming the state-of-the-art.

2.1 SUMMARIZATION

Abstractive summarization, on the contrary, aims at generating consistent summaries based on understanding the input text. Although there has been much less work on abstractive methods, they can in principle produce much richer summaries. Abstractive summarization is standardized by the DUC2003 and DUC2004 competitions (Over et al. (2007)). Some of the prominent approaches on this task include Banko et al. (2000), Zajic et al. (2004), Cohn & Lapata (2008) and Woodsend et al. (2010). Among them, the TOPIARY system (Zajic et al. (2004)) performs the best in the competitions amongst non neural net based methods.

Very recently, the success of deep neural networks in many natural language processing tasks (Collobert et al. (2011)) has inspired new work in abstractive summarization. Rush et al. (2015) propose a neural attention model with a convolutional encoder to solve this task. Hu et al. (2015) build a large dataset for Chinese text summarization and propose to feed all hidden states from the encoder into the decoder. More recently, Chopra et al. (2016) extended Rush et al. (2015)'s work with an RNN decoder, and Nallapati et al. (2016) proposed an RNN encoder-decoder architecture for summarization. Both techniques are currently the state-of-the-art on the DUC competition. However, the encoders exploited in these methods lack the ability to encode each word conditioned on the whole text, as an RNN encodes a word into a hidden vector by taking into account only the words up to that time step.

In contrast, in this work we propose a 'Read-Again' encoder-decoder architecture, which enables the encoder to understand each input word after reading the whole sentence. Our encoder first reads the text, and the results from the first read help represent the text in the second pass over the source text. Our second contribution is a simple copy mechanism that allows us to significantly reduce the decoder vocabulary size, resulting in much faster inference times. Furthermore, our copy mechanism allows us to handle out-of-vocabulary words in a principled manner. Finally, our experiments show state-of-the-art performance on the DUC competition.

2.2 NEURAL MACHINE TRANSLATION

Dealing with Out-Of-Vocabulary words (OOVs) is an important issue in sequence to sequence approaches, as we cannot enumerate all possible words and learn their embeddings, since they might not be part of our training set. Luong et al. (2014) address this issue by annotating words on the source, and aligning OOVs in the target with those source words.
Recently, Vinyals et al. (2015) propose Pointer Networks, which calculate a probability distribution over the input sequence instead of predicting a token from a pre-defined dictionary. Cheng & Lapata (2016) develop a neural-based extractive summarization model, which predicts the targets from the input sequences. Gulcehre et al. (2016); Nallapati et al. (2016) use explicit gating to decide adaptively whether to generate a target word from the fixed-size dictionary or from the input sequence. Gu et al. (2016) use an implicit gating operation instead of the explicit gating. This is similar to our decoder. However, our decoder can also extract different OOVs' embeddings accordingly from the input text, instead of using a single <UNK> embedding to represent all OOVs. This further enhances the model's ability to handle OOVs.

Text summarization can be formulated as a sequence to sequence prediction task, where the input is a longer text and the output is a summary of that text. In this paper we develop an encoder-decoder approach to summarization. The encoder is used to represent the input text with a set of continuous vectors, and the decoder is used to generate a summary word by word.

In the following, we first introduce our 'Read-Again' model for encoding sentences. The idea behind our approach is very intuitive and is inspired by how humans do this task. When we create summaries, we first read the text and then we do a second read where we pay special attention to the words that are relevant to generate the summary. Our 'Read-Again' model implements this idea by reading the input text twice and using the information acquired from the first read to bias the second read. This idea can be seamlessly plugged into LSTM and GRU models. Our second contribution is a copy mechanism used in the decoder. It allows us to reduce the decoder vocabulary size dramatically and can be used to extract a better embedding for OOVs. Fig. 1(a) gives an overview of our model.

Figure 1: Read-Again Summarization Model. (a) Overall Model: two-pass encoder (first read, second read). (b) Decoder with copy mechanism.

We first review the typical encoder used in machine translation (e.g., Sutskever et al. (2014); Bahdanau et al. (2014)). Let x = {x_1, x_2, ..., x_n} be the input sequence of words. An encoder sequentially reads each word and creates the hidden representation h_i by exploiting a recurrent neural network (RNN):

    h_i = RNN(x_i, h_{i-1})

where x_i is the word embedding of x_i. The hidden vectors h = {h_1, h_2, ..., h_n} are then treated as the feature representations for the whole input sentence and can be used by another RNN to decode and generate a target sentence. Although RNNs have been shown to be useful in modeling sequences, one of the major drawbacks is that h_i depends only on past information, i.e., {x_1, ..., x_i}. However, it is hard (even for humans) to have a proper representation of a word without reading the whole input sentence.

Following this intuition, we propose our 'Read-Again' model, where the encoder reads the input sentence twice. In particular, the first read is used to bias the second, more attentive read.
We apply this idea to two popular RNN architectures, i.e., GRU and LSTM, resulting in better encodings of the input text. Note that although other alternatives, such as bidirectional RNNs, exist, the hidden states from the forward RNN lack direct interactions with the backward RNN, and thus forward/backward hidden states still cannot utilize the whole sequence. Besides, although we only use our model in a uni-directional manner, it can also be easily adapted to the bidirectional case. We now describe the two variants of our model.

Figure 2: (a) GRU Read-Again Encoder. (b) LSTM Read-Again Encoder.

3.1.1 GRU READ-AGAIN

The first reading is performed with a GRU:

    h_i^1 = GRU1(x_i, h_{i-1}^1)

where the function GRU1 is defined as

    z_i = σ(W_z [x_i, h_{i-1}^1])
    r_i = σ(W_r [x_i, h_{i-1}^1])
    h̃_i^1 = tanh(W_h [x_i, r_i ⊙ h_{i-1}^1])
    h_i^1 = (1 - z_i) ⊙ h_{i-1}^1 + z_i ⊙ h̃_i^1

It consists of two gatings z_i, r_i, controlling whether the current hidden state h_i^1 should be directly copied from h_{i-1}^1 or should pass through a more complex path h̃_i^1.

Given the sentence feature vector h_n^1, we then compute an importance weight vector α_i for each word for the second reading:

    α_i = tanh(W_e h_i^1 + U_e h_n^1 + V_e x_i)

where W_e, U_e, V_e are learnable parameters. Note that α_i is a vector, representing the importance of each dimension in the word embedding. Empirically, we find that using a vector is better than a scalar gating. We hypothesize that this is because different dimensions represent different semantic meanings, and a scalar gating mechanism lacks the ability to capture the variances among these dimensions.

We put the importance weight α_i on the skip-connections, as shown in Fig. 2(a), to bias the two information flows: if the current word x_i has a very small weight α_i, then the second-read hidden state h_i^2 will mostly take the information directly from the previous state h_{i-1}^2, ignoring the influence of the current word. If α_i is close to 1, it behaves like a standard GRU, which is influenced by the current word. Thus the second reading has the following update rule:

    h_i^2 = (1 - α_i) ⊙ h_{i-1}^2 + α_i ⊙ GRU2(x_i, h_{i-1}^2)

Combining this with GRU2(x_i, h_{i-1}^2) = (1 - z_i) ⊙ h_{i-1}^2 + z_i ⊙ h̃_i^2, the update simplifies to

    h_i^2 = (1 - α_i ⊙ z_i) ⊙ h_{i-1}^2 + (α_i ⊙ z_i) ⊙ h̃_i^2

This equation shows that our 'read-again' model on GRU is equivalent to replacing the GRU cell with a more general gating mechanism that also depends on the feature representation of the whole sentence computed from the first reading pass. We argue that adding this global information helps direct the information flow for the forward pass, resulting in a better encoder.

3.1.2 LSTM READ-AGAIN

We now apply the 'Read-Again' idea to the LSTM architecture, as shown in Fig. 2(b). Our first reading is performed by an LSTM1 defined as

    f_i = σ(W_f [x_i, h_{i-1}])
    i_i = σ(W_i [x_i, h_{i-1}])
    o_i = σ(W_o [x_i, h_{i-1}])
    C̃_i = tanh(W_C [x_i, h_{i-1}])
    C_i = f_i ⊙ C_{i-1} + i_i ⊙ C̃_i
    h_i = o_i ⊙ tanh(C_i)

Different from the GRU architecture, the LSTM calculates the hidden state by applying a non-linear activation function to the cell state C_i, instead of a linear combination of the two paths used in the GRU. Thus for our second read, instead of using skip-connections, we make the gating functions explicitly depend on the whole sentence vector computed from the first reading pass. We argue that this helps the encoding of the second reading LSTM2, as all gating and updating increments are also conditioned on the whole sequence feature vector (h_i^1, h_n^1). Thus

    h_i^2 = LSTM2([x_i, h_i^1, h_n^1], h_{i-1}^2)
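A minimal NumPy sketch of the second-pass GRU update above. Shapes and the params dictionary are our illustrative choices, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def read_again_gru_step(x_i, h1_i, h1_n, h2_prev, params):
    """One second-read step: h2_i = (1 - a_i) * h2_prev + a_i * GRU2(...)."""
    We, Ue, Ve = params["We"], params["Ue"], params["Ve"]
    Wz, Wr, Wh = params["Wz"], params["Wr"], params["Wh"]
    # Importance weights from the first read (vector-valued gate).
    alpha = np.tanh(We @ h1_i + Ue @ h1_n + Ve @ x_i)
    # Standard GRU2 cell on [x_i, h2_prev].
    xh = np.concatenate([x_i, h2_prev])
    z = sigmoid(Wz @ xh)
    r = sigmoid(Wr @ xh)
    h_tilde = np.tanh(Wh @ np.concatenate([x_i, r * h2_prev]))
    gru2 = (1 - z) * h2_prev + z * h_tilde
    # Skip-connection biased by alpha.
    return (1 - alpha) * h2_prev + alpha * gru2
```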
3.1.3 READING MULTIPLE SENTENCES

In this section we extend our 'Read-Again' model to the case where the input sequence has more than one sentence. Towards this goal, we propose to use a hierarchical representation, where each sentence has its own feature vector from the first reading pass. We then combine them into a single vector to bias the second reading pass. We illustrate this in the context of two input sentences, but it is easy to generalize to more sentences. Let {x_1, x_2, ..., x_n} and {x'_1, x'_2, ..., x'_m} be the two input sentences. The first RNN reads these two sentences independently to get two sentence feature vectors, h_n^1 and h'_m^1, respectively.

Here we investigate two different ways to handle multiple sentences. Our first option is to simply concatenate the two feature vectors to bias our second reading pass:

    h_i^2 = RNN2([x_i, h_i^1, h_n^1, h'_m^1], h_{i-1}^2)
    h'_i^2 = RNN2([x'_i, h'_i^1, h_n^1, h'_m^1], h'_{i-1}^2)

The second option we explored is shown in Fig. 3. In particular, we use a non-linear transformation to get a single feature vector h_global from both sentence feature vectors, and bias the second read as

    h_i^2 = RNN2([x_i, h_i^1, h_n^1, h_global], h_{i-1}^2)

Note that this is more easily scalable to more sentences. In our experiments both approaches perform similarly.

Figure 3: Hierarchical Read-Again.

3.2 DECODER WITH COPY MECHANISM

Our decoder reads the vector representations of the input text using an attention mechanism, and generates the target summary word by word. We use an LSTM as our decoder, with a fixed-size vocabulary dictionary Y and learnable word embeddings Y ∈ R^{|Y|×dim}. At time-step t the LSTM generates a summary word y_t by first computing the current hidden state s_t from the previous hidden state s_{t-1}, the previous summary word y_{t-1} and the current context vector c_t, where c_t is obtained by attending over the second-read encoder states h_i^2, i = 1, ..., n.

In this paper we argue that only a small number of common words is needed for generating a summary, in addition to the words that are present in the source text. We can consider this a hybrid approach which combines extractive and abstractive summarization. This has two benefits: first, it allows us to use a very small vocabulary size, speeding up inference; furthermore, we can create summaries which contain OOVs if they are present in the source text.

A typical way to treat OOVs is to encode them with a single shared embedding. However, different OOVs can have very different meanings, and thus using a single embedding for all OOVs will confuse the model. This is particularly detrimental when using small vocabulary sizes. Here we address this issue by deriving the representations of OOVs from their corresponding context in the input text. Towards this goal, we change the update rule of y_{t-1}. In particular, if y_{t-1} belongs to a word that is in our decoder vocabulary, we take its representation from the word embedding; otherwise, if it appears in the input sentence as x_i, we use

    y_{t-1} = ρ_i = tanh(W_c h_i^2 + b_c)

where W_c and b_c are learnable parameters. Since h_i^2 encodes useful context information of the source word x_i, ρ_i can be interpreted as the semantics of this word extracted from the input sentence.
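A small sketch of this OOV-embedding rule (illustrative; it assumes the second-read encoder states h2 and a tokenized source sentence, and that "<UNK>" is in the vocabulary):

```python
import numpy as np

def prev_word_embedding(y_prev, vocab_emb, word2id, src_words, h2, Wc, bc):
    """Embedding of the previous summary word y_{t-1}.

    In-vocabulary words use the learned embedding table; words copied
    from the source get tanh(Wc h2_i + bc) at their source position.
    """
    if y_prev in word2id:                      # in decoder vocabulary Y
        return vocab_emb[word2id[y_prev]]
    if y_prev in src_words:                    # OOV but present in the input
        i = src_words.index(y_prev)
        return np.tanh(Wc @ h2[i] + bc)        # rho_i from the equation above
    return vocab_emb[word2id["<UNK>"]]         # OOV and absent: shared <UNK>
```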
Furthermore, if y_{t-1} does not appear in the input text, nor in Y, then we represent y_{t-1} using the <UNK> embedding.

Given the current decoder's hidden state s_t, we can generate the target summary word y_t. As shown in Fig. 1(b), at each time step during decoding, the decoder outputs a distribution over generating words from Y, as well as over copying a specific word x_i from the source sentence.

We jointly learn our encoder and decoder by maximizing the likelihood of decoding the correct word at each time step. We refer the reader to the experimental evaluation for more details.

4 EXPERIMENTAL EVALUATION

In this section, we show results of abstractive summarization on the Gigaword (Graff & Cieri (2003); Napoles et al. (2012)) and DUC2004 (Over et al. (2007)) datasets. Our model can learn a meaningful re-reading weight distribution for each word in the input text, putting more emphasis on important verbs and nouns, while ignoring common words such as prepositions. As for the decoder, we demonstrate that our copy mechanism can successfully reduce the typical vocabulary size by a factor of 5 while achieving much better performance than the state-of-the-art, and by a factor of 30 while maintaining the same level of performance. In addition, we provide an analysis and examples of which words are copied during decoding.

Dataset and Evaluation Metric: We use the Gigaword corpus to train and evaluate our models. Gigaword is a news corpus where the title is employed as a proxy for the summary of the article. We follow the same pre-processing steps of Rush et al. (2015), which include filtering, PTB tokenization, lower-casing, replacing digit characters with #, replacing low-frequency words with UNK and extracting the first sentence in each article. This results in a training set of 3.8M articles, and a validation set and a test set each containing 400K articles. The average sentence length is 31.3 words for the source, and 8.3 words for the summaries. Following the standard protocol, we evaluate the ROUGE score on 2000 random samples from the test set. As for the evaluation metric, we use the full-length F1 score on Rouge-1, Rouge-2 and Rouge-L, following Chopra et al. (2016) and Nallapati et al. (2016), since these metrics are less biased towards the outputs' length than full-length recall scores.

Implementation Details: We implement our model in Tensorflow and conduct all experiments on a NVIDIA Titan X GPU. Our models converged after 2-3 days of training, depending on model size. Our RNN cells in all models have 1 layer, 512-dimensional hidden states, and 512-dimensional word embeddings. We use a dropout rate of 0.2 in all activation layers. All parameters, except the biases, are initialized uniformly with a range of √(3/d), where d is the dimension of the hidden state (Sussillo & Abbott (2014)). The biases are initialized to 0.1. We use plain SGD to train the model with the gradient clipped at 10. We start with an initial learning rate of 2, and halve it every epoch after the first 5 epochs. Our max epoch for training is 10. We use a mini-batch size of 64, which is shuffled during training.

Table 1: Different Read-Again models. Ours denotes Read-Again models. C denotes copy mechanism. Ours-Opt-1 and Ours-Opt-2 are the models described in Section 3.1.3.
Size denotes the size of decoder vocab ulary in a model.\nvocabulary size as well as an attention encoder-decoder model with uni-directional GRU encoder We allow the decoder to generate variable length summaries. As shown in Table|1|our Read-Again models outperform the baselines on all ROUGE scores, when using both 15K and 69K sized vo- cabularies. We also observe that adding the copy mechanism further helps to improve performance: Even though the decoder vocabulary size of our approach with copy (15K) is much smaller than ABS (69K) and GRU (69K), it achieves a higher ROUGE score. Besides, our Multiple-Sentences model achieves the best performance.\nEvaluation on DUC2004: DUC 2004 (Over et al.(2007) is a commonly used benchmark on summarization task consisting of 500 news articles. Each article is paired with 4 different human generated reference summaries, capped at 75 characters. This dataset is evaluation-only. Similar to Rush et al.[(2015), we train our neural model on the Gigaword training set, and show the models' performances on DUC2004. Following the convention, we also use ROUGE limited-length recall as our evaluation metric, and set the capping length to 75 characters. We generate summaries with 15 words using beam-size of 10. As shown in Table[2] our method outperforms all previous methods on Rouge-1 and Rouge-L, and is comparable on Rouge-2. Furthermore, our model only uses 15k decoder vocabulary, while previous methods use 69k or 200k.\nImportance Weight Visualization: As we described in the section before, Q; is a high-dimension. vector representing the importance of each word x;. While the importance of a word is different over. each dimension, by averaging we can still look at general trends of which word is more relevant.. Fig.4 depicts sample sentences with the importance weight a; over input words. Words such as. the, a, 's, have small a;, while words such as aeronautics, resettled, impediments, which carry more information have higher values. This shows that our read-again technique indeed extracts useful. information from the first reading to help bias the second reading results..\nFigure 4: Weight Visualization. Black indicates high weight"}, {"section_index": "9", "section_name": "4.2 EVALUATION OF COPY MECHANISM", "section_text": "Table|3|shows the effect on our model of decreasing the decoder vocabulary size. We can see tha. when using the copy mechanism, we are able to reduce the decoder vocabulary size from 69K tc 2K, with only 2-3 points drop on ROUGE score. This contrasts the models that do not use the copy. mechanism. Equipped with a copy mechanism, our model is able to generate OOVs as summary.\nTable 2: Rouge-N limited-length recall on DUC2004. Size denotes the size of decoder vocabulary in a model.\nTable 3: ROUGE Evaluation for Models with Different Decoder Size and 110k Encoder Size. Ours denotes Read-Again. C denotes copy mechanism..\nTable 4: ROUGE Evaluation for Models with Different Encoder Size and 15k Decoder Size. Ours denotes Read-Again. C denotes copy mechanism.\nwords, and thus maintains its expressive ability even with a small decoder vocabulary size. We alsc observe from Table 4|that the copy mechanism help us to decrease the encoder vocabulary size as. well. The model without copy suffers from severe OOV problem when encoder size is small, since. a single shared <UNK> embedding cannot depict many different OOVs. This makes it difficult for. the encoder to understand the input text. Meanwhile, our copy model can extract an OOV's meaning. 
accordingly from its context in the input text, and thus it is sufficient to learn and store only the high-frequency words embeddings using our model, which in turn save the storage. We also notice. that shrinking the encoder vocabulary to 15k achieves better result. One possible reason is that long tail words can not learn efficient embeddings during training, and representing them with extracted. embedding from our model performs better.\nTable 5 shows the decoding time as a function of vocabulary size. As computing the soft-max i usually the bottleneck for decoding, reducing vocabulary size dramatically reduces the decoding. ime from 0.38 second per sentence to 0.08 second.\nTable [6|provides some examples of visualization of the copy mechanism. Note that we are able to copy key words from source sentences to improve the summary. From these examples we can see that our model is able to copy different types of rare words, such as special entities' names in case 1 and 2, rare nouns in case 3 and 4, adjectives in case 5 and 6, and even rare verbs in the last example Note that in the third example, when the copy model's decoder uses the embedding of headmaster as its first input, which is extracted from the source sentence, it generates the same following sentence as the no-copy model. This probably means that the extracted embedding of headmaster is closely related to the learned embedding of teacher\nRouge-1 Rouge-2 Rouge-L Size Ours-LSTM Ours-LSTM (C) Ours-LSTM Ours-LSTM (C) Ours-LSTM Ours-LSTM (C) 2K 14.39 24.21 6.46 11.27 13.74 23.09 5K 20.61 26.83 9.67 12.66 19.58 25.31 15K 25.30 27.37 11.76 12.64 23.74 25.69 30K 26.86 27.49 11.93 12.75 25.16 25.77 69K 27.82 27.89 12.73 12.69 26.01 26.03\nRouge-1 Rouge-2 Rouge-L Size Ours-LSTM Ours-LSTM (C) Ours-LSTM Ours-LSTM (C) Ours-LSTM Ours-LSTM (C) 5K 21.82 26.57 9.80 11.98 20.60 25.00 15K 23.84 27.79 10.69 12.54 22.50 25.96 30K 23.78 27.48 10.68 12.56 22.28 25.94 110K 25.30 27.37 11.76 12.64 23.74 25.69\nIn this paper we have proposed two simple mechanisms to alleviate the problems of current encoder-. decoder models. Our first contribution is a 'Read-Again' model which does not form a representa-. tion of the input word until the whole sentence is read. Our second contribution is a copy mechanism that can handle out-of-vocabulary words in a principled manner allowing us to reduce the decoder vocabulary size and significantly speed up inference. We have demonstrated the effectiveness of our. approach in the context of summarization and shown state-of-the-art performance. In the future, we plan to tackle summarization problems with large input text. We also plan to exploit our findings in other tasks such as machine translation.\nholdings limited for ### million australian -lrb- ### million us dollars -rrb- Golden: urgent air new zealand buys ## percent of australia 's ansett airlines No Copy: air nz to buy ## percent stake in australia 's <unk> Copy: air nz to buy ## percent stake in ansett Input: yemen 's ruling party was expected wednesday to nominate president ali abdullah saleh as its candidate. for september 's presidential election , although saleh insisted he is not bluffing about bowing out.. 
Golden: the #### gmt news advisory No Copy: yemen 's ruling party expected to nominate president as presidential candidate Copy: yemen 's ruling party expected to nominate saleh as presidential candidate Input: a ##l-year-old headmaster who taught children in care homes for more than ## years was jailed for # years on friday after being convicted of ## sexual assaults against his pupils. Golden: britain : headmaster jailed for ## years for paedophilia No Copy: teacher jailed for ## years for sexually abusing childre Copy: headmaster jailed for ## years for sexually abusing children Input: singapore 's rapidly ageing population poses the major challenge to fiscal policy in the ##st century , finance minister richard hu said , and warned against european-style state <unk>. Golden: ageing population to pose major fiscal challenge to singapore No Copy:finance minister warns against <unk> state Copy: s pore 's ageing population poses challenge to fiscal policy Input: angola is planning to refit its ageing soviet-era fleet of military jets in russian factories , a media report. said on tuesday. Golden: angola to refit jet fighters in russia : report No Copy: angola to <unk> soviet-era soviet-era fleet"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.\nKyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Hol ger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder. for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.\nSumit Chopra, Michael Auli, Alexander M Rush, and SEAS Harvard. Abstractive sentence summa- rization with attentive recurrent neural networks. arXiv preprint arXiv:1602.06023, 2016.\nTrevor Cohn and Mirella Lapata. Sentence compression beyond word deletion. In Proceedings of th 22nd International Conference on Computational Linguistics- Volume 1, pp. 137-144. Associatio for Computational Linguistics, 2008.\nRonan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pave Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Re. search, 12(Aug):2493-2537, 2011\nCarlos A Colmenares, Marina Litvak, Amin Mantrach, and Fabrizio Silvestri. Heads: Headline generation as sequence prediction using an abstract feature-rich space. 2015\nKatja Filippova and Yasemin Altun. Overcoming the lack of parallel data in sentence compression In EMNLP, pp. 1481-1491. Citeseer, 2013.\nDavid Graff and Christopher Cieri. English giga-word, 2003. Linguistic Data Consortium, Philade plhia, 2003.\nJiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. Incorporating copying mechanism ir sequence-to-sequence learning. arXiv preprint arXiv:1603.06393, 2016.\nBaotian Hu, Qingcai Chen, and Fangze Zhu. Lcsts: A large scale chinese short text summarization dataset. arXiv preprint arXiv:1506.05865, 2015.\nNal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In EMNLP, vol ume 3, pp. 413, 2013.\nMinh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014..\nRamesh Nallapati, Bowen Zhou, Ca glar Gulcehre, and Bing Xiang. Abstractive text summarization using sequence-to-sequence rnns and beyond. 2016\nJoel Larocca Neto, Alex A Freitas, and Celso AA Kaestner. 
Automatic text summarization using a machine learning approach. In Brazilian Symposium on Artificial Intelligence, pp. 205-215. Springer, 2002.\nAlexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. 2015.\nDavid Sussillo and LF Abbott. Random walk initialization for training very deep feedforward net works. arXiv preprint arXiv:1412.6558. 2014.\nDavid Zajic, Bonnie Dorr, and Richard Schwartz. Bbn/umd at duc-2o04: Topiary. In Proceedings of the HLT-NAACL 2004 Document Understanding Workshop, Boston, pp. 112-119, 2004.\nOriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural Information Processing Systems, pp. 2692-2700. 2015\nKam-Fai Wong, Mingli Wu, and Wenjie Li. Extractive summarization using supervised and semi supervised learning. In Proceedings of the 22nd International Conference on Computational Linguistics- Volume 1, pp. 985-992. Association for Computational Linguistics, 2008"}] |
SkYbF1slg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "How to discover the unknown structures in data is a key task for machine learning. Learning goo. representations from observed data is important because a clearer description may help reveal th. underlying structures. Representation learning has drawn considerable attention in recent year (Bengio et al.]2013). One category of algorithms for unsupervised learning of representations i. based on probabilistic models (Lewicki & Sejnowski]200o) Hinton & Salakhutdinov2006f Le. et al.|2008), such as maximum likelihood (ML) estimation, maximum a posteriori (MAP) probabil. ity estimation, and related methods. Another category of algorithms is based on reconstruction erro or generative criterion (Olshausen & Field]1996f Aharon et al.[2006f Vincent et al.|2010] Maira et al.|2010f[Goodfellow et al.[2014), and the objective functions usually involve squared errors witl additional constraints. Sometimes the reconstruction error or generative criterion may also have . probabilistic interpretation (Olshausen & Field1997|Vincent et al.]2010).\nShannon's information theory is a powerful tool for description of stochastic systems and could be utilized to provide a characterization for good representations (Vincent et al.]2010). However computational difficulties associated with Shannon's mutual information (MI) (Shannon1948) have hindered its wider applications. The Monte Carlo (MC) sampling (Yarrow et al.J2012) is a conver gent method for estimating MI with arbitrary accuracy, but its computational inefficiency makes i unsuitable for difficult optimization problems especially in the cases of high-dimensional input stim uli and large population networks. Bell and Sejnowski (Bell & Sejnowskil|1995f|1997) have directly applied the infomax approach (Linskerf[1988) to independent component analysis (ICA) of data witl independent non-Gaussian components assuming additive noise, but their method requires that the number of outputs be equal to the number of inputs. The extensions of ICA to overcomplete o undercomplete bases incur increased algorithm complexity and difficulty in learning of parameter (Lewicki & Sejnowski]|2000] [Kreutz-Delgado et al.]2003] [Karklin & Simoncelli][2011)."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Since Shannon MI is closely related to ML and MAP (Huang & Zhang) 2017). the algorithms o representation learning based on probabilistic models should be amenable to information-theoretic treatment. Representation learning based on reconstruction error could be accommodated also b information theory, because the inverse of Fisher information (FI) is the Cramer-Rao lower boun on the mean square decoding error of any unbiased decoder (Rao1945). Hence minimizing the reconstruction error potentially maximizes a lower bound on the MI 7Vincent et al.]2010).\nRelated problems arise also in neuroscience. It has long been suggested that the real nervous sys tems might approach an information-theoretic optimum for neural coding and computation (Barlow 1961; |Atick||1992f |Borst & Theunissen1999). However, in the cerebral cortex, the number of neu rons is huge, with about 10 neurons under a square millimeter of cortical surface (Carlo & Stevens 2013). 
It has often been computationally intractable to precisely characterize information coding and processing in large neural populations.\nTo address all these issues, we present a framework for unsupervised learning of representations in a large-scale nonlinear feedforward model based on infomax principle with realistic biological constraints such as neuron models with Poisson spikes. First we adopt an objective function based on an asymptotic formula in the large population limit for the MI between the stimuli and the neural population responses (Huang & Zhang2017). Since the objective function is usually nonconvex, choosing a good initial value is very important for its optimization. Starting from an initial value, we use a hierarchical infomax approach to quickly find a tentative global optimal solution for each layer by analytic methods. Finally, a fast convergence learning rule is used for optimizing the final objec- tive function based on the tentative optimal solution. Our algorithm is robust and can learn complete. overcomplete or undercomplete basis vectors quickly from different datasets. Experimental results showed that the convergence rate of our method was significantly faster than other existing methods, often by an order of magnitude. More importantly, the number of output units processed by our method can be very large, much larger than the number of inputs. As far as we know, no existing model can easily deal with this situation."}, {"section_index": "2", "section_name": "2 METHODS", "section_text": "a2 lnp(r|x) J(x) dxdxT a2 lnp(x) P(x) = dxdxT\na2 ln p(x) P(x) = dxdxT\nOur goal 1s to maxmize Ml I. A; R) by lnding tne optmal PDf p(rx) under some constraint. conditions, assuming that p(r|x) is characterized by a noise model and activation functions f (x; 0n) with parameters 0n for the n-th neuron (n = 1,... , N). In other words, we optimize p(r[x) by solving for the optimal parameters 0n. Unfortunately, it is intractable in most cases to solve for the optimal parameters that maximizes I(X; R). However, if p(x) and p(r[x) are twice continuously differentiable for almost every x E RK, then for large N we can use an asymptotic formula to. approximate the true value of I(X; R) with high accuracy (Huang & Zhang2017): G(x) I(X; R) ~ IG det + H (X) (1) 2T e\n1 G(x) I(X;R) ~ Ig = 1n det +H(X) 2 2e x\na2 lnp(r[x) Jx dxdxT rx\nO lnp(r|x;0k) 0ln p(r|x;0k and ak > 0 (k = 1,... , K) is the population density dx OxT rx\nminimize Qg{ak}]= (ln(det(G(x))))x, 2 K1 subject to Qk =1,Qk>0,Vk=1,,K1 k=1\nSince Qg[{ak}] is a convex function of {ak} (Huang & Zhang. 2017), we can readily find the optimal solution for small K by efficient numerical methods. For large K, however, finding an. optimal solution by numerical methods becomes intractable. In the following we will propose an. alternative approach to this problem. Instead of directly solving for the density distribution { ak }, we. optimize the parameters { Qk} and {0g} simultaneously under a hierarchical infomax framework."}, {"section_index": "3", "section_name": "2.2 HIERARCHICAL INFOMAX", "section_text": "with wn being a K-dimensional weights vector, f(yn; 0n) is a nonlinear function, 0n = (wT, 0 and 0n are the parameter vectors (n = 1, . .. , N).\nIn general, it is very difficult to find the optimal parameters, On, n = 1, . .: , N, for the following. reasons. First, the number of output neurons N is very large, usually N > K. 
Second, the activation function f(x; 0n) is a nonlinear function, which usually leads to a nonconvex optimization problem. For nonconvex optimization problems, the selection of initial values often has a great influence on the final optimization results. Our approach meets these challenges by making better use of the large. number of neurons and by finding good initial values by a hierarchical infomax method..\nWe divide the nonlinear transformation into two stages, mapping first from x to yn (n = 1, . : : , N). and then from yn to f(yn; 0n), where yn can be regarded as the membrane potential of the n-th neuron, and f(yn; On) as its firing rate. As with the real neurons, we assume that the membrane potential is corrupted by noise:\nTherefore, given the activation function f(x; 0g), our goal becomes to find the optimal popula tion distribution density Qk of parameter vector 0g so that the MI between the stimulus x and the response r is maximized. By Eq. (1), our optimization problem can be stated as follows:\nFor clarity, we consider neuron model with Poisson spikes although our method is easily applicable to other noise models. The activation function f(x; 0n) is generally a nonlinear function, such as sigmoid and rectified linear unit (ReLU) (Nair & Hinton2010). We assume that the nonlinear function for the n-th neuron has the following form: f(x; 0n) = f(yn; 0n), where.\nYn = X\nYo = Yn + Zn,\nNk 1 Yk = Nk n=1 Zk ~ N(0, I\nw'x ky_f(Y)R X Y Y+Z y 1/Nk y1 X1 m Vm rmy Xk ymN m m XK YK\nFigure 1: A neural network interpretaton for random variables X, Y, Y,Y, R\nI(X;R) =I(Y;R) I(Y;R) I(Y;R) I(X;R) I(X;Y)=I(X;Y) I(X;Y)\nI(Y; R) ~ I(Y; R) ~ I(Y; R) = I(X; R) I(X;Y) ~I(X;Y) =I(X;Y)\nA major advantage of incorporating membrane noise is that it facilitates finding the optimal solutior by using the infomax principle. Moreover, the optimal solution obtained this way is more robust that is, it discourages overfitting and has a strong ability to resist distortion. With vanishing noise 2 > 0, we have Yk -> Yk, f(yk; 0k) f(yk; 0) = f(x; 0k), so that Eqs. (13) and (14) hold as in the case of large Nk.\nTo optimize MI I(Y; R), the probability distribution of random variable Y, p(y), needs to be de-. termined, i.e. maximizing I(Y; R) about p(y) under some constraints should yield an optimal distribution: p*(y) = arg maxp(y) I(Y; R). Let C = maxp(y) I (Y; R) be the channel capacity of neural population coding, and we always have I(X; R) C (Huang & Zhang2017). To find a suitable linear transformation from X to Y that is compatible with this distribution p*(y), a reason- able choice is to maximize I(X; Y) (< I(X; Y)), where Y is a noise-corrupted version of Y. This. implies minimum information loss in the first transformation step. However, there may exist many. transformations from X to Y that maximize I(X; Y) (see Appendix|A.3.1). Ideally, if we can find. a transformation that maximizes both I(X; Y) and I(Y; R) simultaneously, then I(X; R) reaches its maximum value: I(X; R) = maxp(y) I (Y; R) = C.\nFrom the discussion above we see that maximizing I(X; R) can be divided into two steps. namely, maximizing I(X; Y) and maximizing I(Y; R). The optimal solutions of maxI(X; Y and max I(Y; R) will provide a good initial approximation that tend to be very close to the optimal solution of max I(X; R).\nSimilarly, we can extend this method to multilayer neural population networks. 
For example, a two layer network with outputs R(1) and R(2) form a Markov chain, X -> R(1) -> R(1) -> R(1) ->\nR(2), where random variable R(1) is similar to Y, random variable R(1) is similar to Y, and R(1) is similar to Y in the above. Then we can show that the optimal solution of max I(X; R(2)) can be approximated by the solutions of max I(X; R(1)) and max I(R(1); R(2)), with I(R(1); R(2)) I(R(1) ; R(2)).\nMore generally, consider a highly nonlinear feedforward neural network that maps the input x tc output z, with z = F(x;0) = ht o... o h1 (x), where hi (l = 1, ... , L) is a linear or nonlinea) function (Montufar et al.||2014). We aim to find the optimal parameter 0 by maximizing I (X; Z). I is usually difficult to solve the optimization problem when there are many local extrema for F(x;0) However, if each function hi is easy to optimize, then we can use the hierarchical infomax methoc described above to get a good initial approximation to its global optimization solution, and go fron there to find the final optimal solution. This information-theoretic consideration from the neura population coding point of view may help explain why deep structure networks with unsupervisec pre-training have a powerful ability for learning representations.\nW= wi,... W K1 = aU, Q1 =..=QK=K-\n1 minimize Q [C] = det C$C 2 X\n1 ,Wk=$(yk)$'(yk)cF C$C W = (W1,: ,WK Ck,k=1,...,K\nThe optimization processes for maximizing I(X; Y) and maximizing I(Y; R) are discussed in detail in Appendix|A.3 First, by maximizing I(X; Y) (see Appendix|A.3.1|for details), we can get the optimal weight parameter wk (k = 1, . .. , K1, see Eq.7) and its population density Qx (see Eq. 6 which satisfy\nBy maximizing I (Y; R) (see Appendix|A.3.2), we further solve the the optimal parameters 0g for the nonlinear functions f(yk; 0g), k = 1, ... , K1. Finally, the objective function for our optimiza-. tion problem (Eqs.5 and 6) turns into (see Appendix|A.3.3|for details):\ndQ[C] C +xw dC\nWhen Ko = Kj (or Ko > K), the objective function Q[C] can be reduced to a simpler form.. and its gradient is also easy to compute (see Appendix |A.4.1). However, when Ko < K1, it is computationally expensive to update C by applying the gradient of Q[C] directly, since it requires matrix inversion for every x. We use another objective function Q[C] (see Eq. [A.118) which is an approximation to Q[C], but its gradient is easier to compute (see Appendix [A.4.2). The function\nUsually, for optimizing the objective in Eq.[17] the orthogonality constraint (Eq.18) is unnecessary. However, this orthogonality constraint can accelerate the convergence rate if we employ it for the initial iteration to update C (see AppendixA.5).\nWe have applied our methods to the natural images from Olshausen's image dataset (Olshausen & Field 1996) and the images of handwritten digits from MNIST dataset (LeCun et al.|1998) using Matlab 2016a on a computer with 12 Intel CPU cores (2.4 GHz). The gray level of each raw image was normalized to the range of 0 to 1. M image patches with size w w = K for training were randomly sampled from the images. We used the Poisson neuron model with a modified sigmoidal\nFirstly, we tested the case of K = Ko = K1 = 144 and randomly sampled M = 10 image patches with size 12 12 from the Olshausen's natural images, assuming that N = 106 neurons were divided. into K1 = 144 classes and e = 1 (see Eq.A.52|in Appendix). The input patches were preprocessed by the ZCA whitening filters (see Eq. A.68). 
To test our algorithms, we chose the batch size to be equal to the number of training samples M, although we could also choose a smaller batch size. We. updated the matrix C from a random start, and set parameters tmax = 300, v1 = 0.4, and r = 0.8 for all experiments.\nIn this case, the optimal solution C looked similar to the optimal solution of IICA (Bell & Sejnowsk 1997). We also compared with the fast ICA algorithm (FICA) (Hyvarinen1999), which is faste than IICA. We also tested the restricted Boltzmann machine (RBM) (Hinton et al.|2006) for unsupervised learning of representations, and found that it could not easily learn Gabor-like filter from Olshausen's image dataset as trained by contrastive divergence. However, an improved metho by adding a sparsity constraint on the output units, e.g., sparse RBM (SRBM) (Lee et al.]2008) o sparse autoencoder (Hinton2010), could attain Gabor-like filters from this dataset. Similar result with Gabor-like filters were also reproduced by the denoising autoencoders (Vincent et al.]2010 which method requires a careful choice of parameters, such as noise level, learning rate, and batc SiZe.\nIn order to compare our methods, i.e. Algorithm 1 (Alg.1, see Appendix [A.4.1) and Algorithm. 2 (Alg.2, see Appendix[A.4.2), with other methods, i.e. IICA, FICA and SRBM, we implemented these algorithms using the same initial weights and the same training data set (i.e. 10 image patches. preprocessed by the ZCA whitening filters). To get a good result by IICA, we must carefully select. the parameters; we set the batch size as 50, the initial learning rate as O.01, and final learning rate. as O.0oo1, with an exponential decay with the epoch of iterations. IICA tends to have a faster. convergence rate for a bigger batch size but it may become harder to escape local minima. For. FICA, we chose the nonlinearity function f(u) = log cosh(u) as contrast function, and for SRBM. we set the sparseness control constant p as 0.01 and 0.03. The number of epoches for iterations was set to 300 for all algorithms. Figure[2|shows the filters learned by our methods and other methods. Each filter in Figure2(a)|corresponds to a column vector of matrix C (see Eq. A.69), where each. vector for display is normalized by c < c/ max([c1,k, ... , Ck,k), k = 1, ... , K1. The results. in Figures|2(a)[|2(b)|and|2(c)[look very similar to one another, and slightly different from the results in Figure[2(d)[and|2(e)] There are no Gabor-like filters in Figure[2(f)] which corresponds to SRBM with p = 0.03.\nFigure [3]shows how the coefficient entropy (CFE) (see Eq. A.122) and the conditional entropy (CDE) (see Eq. A.125) varied with training time. We calculated CFE and CDE by sampling once every 10 epoches from a total of 300 epoches. These results show that our algorithms had a fast convergence rate towards stable solutions while having CFE and CDE values similar to the algorithm of IICA, which converged much more slowly. Here the values of CFE and CDE should be as small\nFigure 2: Comparison of filters obtained from 10 natural image patches of size 1212 by our methods (Alg.1 and Alg.2) and other methods. The number of output filters was Kj = 144. (a): Alg.1. (b): Alg.2. (c): IICA. (d): FICA. (e): SRBM (p = 0.01). 
(f): SRBM (p = 0.03).\n-150 300 Alg.1 (eng) Pnrrs s Alg.1 (oig) ennnn eonnnnnn 2 Alg.2 Alg.2 IICA -200 IICA 200 FICA 1.95 SRBM (p = 0.01) SRBM (p = 0.03) -250 100 1.9 eonnnnnl -300 O 1.85 SRBM (p = 0.01) 350 -100 SRBM (p = 0.03) SRBM (p = 0.05) 1.8 SRBM (p = 0.10) -400 -200 100 101 102 100 101 102 100 101 102 time (seconds) time (seconds) time (seconds) (a) (b) (c)\nFigure 3: Comparison of quantization effects and convergence rate by coefficient entropy (see. A.122) and conditional entropy (see[A.125) corresponding to training results (filters) shown in Fig ure 2. The coefficient entropy (panel a) and conditional entropy (panel b and c) are shown as a. function of training time on a logarithmic scale. All experiments run on the same machine using. Matlab. Here we sampled once every 10 epoches out of a total of 300 epoches. We set epoch number to = 50 for Alg.1 and Alg.2 and the start time to 1 second..\nas possible for a good representation learned from the same data set. Here we set epoch number to = 50 in our algorithms (see Alg.1 and Alg.2), and the start time was set to 1 second. This explains the step seen in Figure[3[(b) for Alg.1 and Alg.2 since the parameter was updated when epoch number t = to. FICA had a convergence rate close to our algorithms but had a big CFE, which is reflected by the quality of the filter results in Figure[2] The convergence rate and CFE for SRBM were close to IICA, but SRBM had a much bigger CDE than IICA, which implies that the information had a greater loss when passing through the system optimized by SRBM than by IICA or our methods.\n(a) (b) (c) (d) (e) (f) Figuro mnorige C1 105 15 f siz0 12x13 b\nFrom Figure[3(c) we see that the CDE (or MI I(X; R), see Eq. A.124|and|A.125) decreases (o increases) with the increase of the value of the sparseness control constant p. Note that a smalle p means sparser outputs. Hence, in this sense, increasing sparsity may result in sacrificing some. information. On the other hand, a weak sparsity constraint may lead to failure of learning Gabor like filters (see Figure2(f), and increasing sparsity has an advantage in reducing the impact o. noise in many practical cases. Similar situation also occurs in sparse coding (Olshausen & Field. 1997), which provides a class of algorithms for learning overcomplete dictionary representations o. the input signals. However, its training is time consuming due to its expensive computational cost. although many new training algorithms have emerged (e.g.Aharon et al.]2006] Elad & Aharon. 2006, Lee et al.][2006, Mairal et al.]2010). See Appendix [A.5[for additional experimental results."}, {"section_index": "4", "section_name": "4 CONCLUSIONS", "section_text": "In this paper, we have presented a framework for unsupervised learning of representations via in formation maximization for neural populations. Information theory is a powerful tool for machine learning and it also provides a benchmark of optimization principle for neural information pro cessing in nervous systems. Our framework is based on an asymptotic approximation to MI for a large-scale neural population. To optimize the infomax objective, we first use hierarchical infoma. to obtain a good approximation to the global optimal solution. Analytical solutions of the hierarchi cal infomax are further improved by a fast convergence algorithm based on gradient descent. 
Thi method allows us to optimize highly nonlinear neural networks via hierarchical optimization using infomax principle\nOur model naturally incorporates a considerable degree of biological realism. It allows the opti mization of a large-scale neural population with noisy spiking neurons while taking into account of multiple biological constraints, such as membrane noise, limited energy, and bounded connection weights. We employ a technique to attain a low-rank weight matrix for optimization, so as to reduce the influence of noise and discourage overfitting during training. In our model, many parameters. are optimized, including the population density of parameters, filter weight vectors, and parameters for nonlinear tuning functions. Optimizing all these model parameters could not be easily done by many other methods."}, {"section_index": "5", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Prof. Honglak Lee for sharing Matlab code for algorithm comparison, Prof. Shan Tan for discussions and comments and Kai Liu for helping draw Figure 1. Supported by grant NIH-NIDCD R01 DC013698."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Aharon, M., Elad, M., & Bruckstein, A. (2006). K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. Signal Processing, IEEE Transactions on, 54(11), 4311- 4322.\nFrom the viewpoint of information theory, the unsupervised pre-training for deep learning (Hinton & Salakhutdinov2006 Bengio et al.| 2007) may be reinterpreted as a process of hierarchical infomax, which might help explain why unsupervised pre-training helps deep learning (Erhan et al.]2010). In our framework, a pre-whitening step can emerge naturally by the hierarchical infomax, which might also explain why a pre-whitening step is useful for training in many learning algorithms (Coates et al.]2011}Bengio| 2012).\nOur experimental results suggest that our method for unsupervised learning of representations has obvious advantages in its training speed and robustness over the main existing methods. Our model has a nonlinear feedforward structure and is convenient for fast learning and inference. This simple and flexible framework for unsupervised learning of presentations should be readily extended to training deep structure networks. In future work, it would interesting to use our method to train deep. structure networks with either unsupervised or supervised learning.\nAmari, S. (1999). Natural gradient learning for over- and under-complete bases in ica. Neura Comput.. 11(8).1875-1883\nAtick, J. J. (1992). Could information theory provide an ecological theory of sensory processing? Network: Comp. Neural., 3(2), 213-251.\nBell, A. J. & Sejnowski, T. J. (1995). An information-maximization approach to blind separation and blind deconvolution. Neural Comput., 7(6), 1129-1159.\nBell, A. J. & Sejnowski, T. J. (1997). The \"independent components\" of natural scenes are edge filters. Vision Res., 37(23), 3327-3338\nBengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new per 35(8).1798-1828\nBengio, Y., Lamblin, P., Popovici, D., Larochelle, H., et al. (2oo7). Greedy layer-wise training of deep networks. Advances in neural information processing systems, 19, 153.\nBorst, A. & Theunissen, F. E. (1999). Information theory and neural coding. Nature neuroscience 2(11), 947-957.\nCortes, C. & Vapnik, V. (1995). Support-vector networks. Machine learning, 20(3), 273-297\nElad, M. & Aharon, M. (2006). 
Image denoising via sparse and redundant representations over learned dictionaries. Image Processing, IEEE Transactions on, 15(12), 3736-3745.\nHinton, G., Osindero, S., & Teh, Y.-W. (2o06). A fast learning algorithm for deep belief nets. Neural computation, 18(7), 1527-1554.\nHinton, G. E. & Salakhutdinov, R. R. (2o06). Reducing the dimensionality of data with neura networks. Science, 313(5786), 504-507.\nBengio, Y. (2012). Deep learning of representations for unsupervised and transfer learning. Unsu nervised and Transfer Ieo. s in Machine Learning. 7. 19.\nHubel, D. H. & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional archi tecture in the cat's visual cortex. The Journal of physiology, 160(1), 106-154\nKonstantinides, K. & Yao, K. (1988). Statistical analysis of effective singular values in matrix rank. determination. Acoustics, Speech and Signal Processing, IEEE Transactions on, 36(5), 757-763\nKreutz-Delgado, K., Murray, J. F., Rao, B. D., Engan, K., Lee, T. S., & Sejnowski, T. J. (2003). Dictionary learning algorithms for sparse representation. Neural computation, 15(2), 349-396\nLeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to docu ment recognition. Proceedings of the IEEE, 86(11). 2278-2324.\nLee, H., Battle, A., Raina, R., & Ng, A. Y. (2o06). Efficient sparse coding algorithms. In Advances in neural information processing systems (pp. 801-808).\nLewicki, M. S. & Olshausen, B. A. (1999). Probabilistic framework for the adaptation and compar ison of image codes. JOSA A. 16(7). 1587-1601..\nLinsker, R. (1988). Self-Organization in a perceptual network. Computer, 21(3), 105-117.\nMairal. J.. Bach. F. Ponce, J., & Sapiro, G. (201O). Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11, 19-60..\nMontufar, G. F., Pascanu, R., Cho, K., & Bengio, Y. (2014). On the number of linear regions of deej neural networks. In Advances in Neural Information Processing Systems (pp. 2924-2932).\nSrivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout A simple way to prevent neural networks from overfitting. The Journal of Machine Learning. Research, 15(1), 1929-1958.\nLee, H., Ekanadham, C., & Ng, A. Y. (20o8). Sparse deep belief net model for visual area v2. In Advances in neural information processing systems (pp. 873-880).\nVincent, P., Larochelle, H., Lajoie, I., Bengio, Y., & Manzagol, P.-A. (201o). Stacked denoising. autoencoders: Learning useful representations in a deep network with a local denoising criterion. The Journal of Machine Learning Research, 11, 3371-3408"}, {"section_index": "7", "section_name": "APPENDIX", "section_text": "1 G(x H(X|R) =-(lnp(x|r)>r,x In det 2 2e\nOlnp(r[x) Olnp(r[x) dx dxT rx\nIf we suppose p(r|x) is conditional independent, namely, p(r|x) = Hn=1 p(rn|x; 0n), then we have (seeHuang & Zhang2017)\nwhere p(0) is the population density function of parameter 0\nN 1 8(0-0n) N n=1\nK1 J(x) ~ N QkS(x;0k)z k=1\nwhere f(x: 0) > 0 is the activation function (mean res1 ponse) of neuron and\nJ(x) = N p(O)S(x; 0)d0, alnp(r[x;0) Olnp(r|x;0) S(x; 0) dx dxT\nand 8() denotes the Dirac delta function. It can be proved that the approximation function of MI IG[p(0)] (Eq.[1) is concave about p(0) (Huang & Zhang]2017). In Eq. (A.3), we can approximate the continuous integral by a discrete summation for numerical computation,.\nf(x;0)r (r[x;0) exp(-f(x;0)) r! 
1 0f(x;0) 0f(x;0 S(x;0) f(x;0) dx dxT dg(x;0) dg(x;0) dx dxT\ng(x;0)=2f(x;0)\nSimilarly. for Gaussian noise model, we have\n1 (r-fx;0) p(r[x;0) = exp 202\nwhere > 0 denotes the standard deviation of noise\nI(X;R) I(Y;R) <I(Y;R) I(Y;R)\np(ykx) = p(yk,..., YkN =N(wtx, N- k =1.....K1\np(y[x) =p(y|x) p(y) =p(y), I(X;Y)=I(X;Y)\nOn the other hand, when Nk is large, from Eq. (10) we know that the distribution of Zk, namely N (0, N-1o2), approaches a Dirac delta function 8(z). Then by (7) and (9) we have p (r|y) p(r[y) = p(r[x) and\nr[y) I(X;R) =I(Y;R) -I(Y;R), x r,x (ry) I(Y;R) =I(Y;R) ~ I(Y; R), (r|y) r,y,y (r|y) I(Y;R) =I(Y;R) ~ I(Y; R), 1n r|y) r,y,y (xy) I(X;Y)=I(X;Y 1n ~ I(X;Y) x,y,y\nA.3 HIERARCHICAL OPTIMIZATION FOR MAXIMIZING I(X: R\nIn the following, we will discuss the optimization procedure for maximizing I(X; R) in two stages maximizing I(X; Y) and maximizing I(Y; R).\n1 (r- fx;0)) p(r[x;0) = exp 202 1 0f(x;0) 0f(x;0) S(x;0) = 62 dx dxT\nSometimes we do not know the specific form of p(x) and only know M samples, x1, ..., XM, which are independent and identically distributed (i.i.d.) samples drawn from the distribution p(x) Then we can use the empirical average to approximate the integral in Eq. (1):\nM 1 Ig ~ ln(det(G(xm))) + H (X). 2 m=1\nI(X;R) <I(X;Y)<I(X;Y)<I(X;Y)\nIn the first stage, our goal is to maximize the MI I(X; Y) and get the optimal parameters w. (k = 1,...,K). Assume that the stimulus x has zero mean (if not, let x x - (x)) and covariance matrix x. It follows from eigendecomposition that\n1 Yx = XXT = UYUT M - X\nx = -1/2UTx, Wk = 1/2UI Wk Yk =wfx,\nBy the central limit theorem, the distribution of random variable X is closer to a normal distribu- tion than the distribution of the original random variable X. On the other hand, the PCA models assume multivariate gaussian data whereas the ICA models assume multivariate non-gaussian data. Hence by a PCA-like whitening transformation (A.24) we can use the approximation (A.31) with. the Laplace's method of asymptotic expansion, which only requires that the peak be close to its. mean while random variable X needs not be exactly Gaussian..\nWithout any constraints on the Gaussian channel of neural populations, especially the peak firing. rates, the capacity of this channel may grow indefinitely: I(X; Y) -> oo. The most common constraint on the neural populations is an energy or power constraint which can also be regarded as a signal-to-noise ratio (SNR) constraint. The SNR for the output yn of the n-th neuron is given by.\n1 1 SNRn Wn,n=1,...,N X\nN K1 1 1 SNRn QkWTWk< P, N 02 n=1 k=1\nminimize Q'qg[W] = subject to h = Tr A E<0\nIK, XI x\nG I(X;Y) ~ IG = 1n det +H(X) 2 2e K1 G ~ No-2 > QkWkWT+IK. k=1\np(x) ~ N (0,Ik), a2 ln p(x) P(x) = 1K dxdxT\nW = WA 1/2 = 1/2UTWA1/2 = [w1,..,WK, A = diag(a1,... , QK W =w1,...,WK W=w1,...,WK E = 0o\nHere E is a constant that does not affect the final optimal solution so we set E = 1. Then we obtair an optimal solution as follows:\nW = aU; A = K- IK1 a EK Yo = diag (. Uo = U(z,1:Ko) E RKKo Vo = V(z,1:Ko) E RK1xKo\nWhen K = Ko = K1, the optimal solution of W in Eq. (A.41) is a whitening-like filter. When. V = Ik, the optimal matrix W is the principal component analysis (PCA) whitening filters. In the. symmetrical case with V = U, the optimal matrix W becomes a zero component analysis (ZCA) whitening filter. If K < K1, this case leads to an overcomplete solution, whereas when Ko K K, the undercomplete solution arises. Since Ko K and Ko K, Q'g achieves its minimum. when Ko = K. 
However, in practice other factors may prevent it from reaching this minimum. For example, consider the average of squared weights..\nwhere Illl denotes the Frobenius norm. The value of s is extremely large when any ok becomes vanishingly small. For real neurons these weights of connection are not allowed to be too large Hence we impose a limitation on the weights: s E1, where E1 is a positive constant. This yields another constraint on the objective function,\nKo E -2_E0. 0 k Ko k=1\nX = USV\nwhere U is given in (A.23), V is a M M unitary orthogonal matrix, S is a K M diagonal matrix with non-negative real numbers on the diagonal, Sx,k = M - 1ox (k = 1, ... , K, K M), and SsT = (M _ 1)E. Let\nwhere V = [v1, ... , VK ] is an K K unitary orthogonal matrix, parameter Ko represents the size of the reduced dimension (1 < Ko < K), and its value will be determined below. Now the optimal parameters wn (n = 1, ... , N) are clustered into K classes (see Eq. A.6) and obey an uniform discrete distribution (see also Eq.[A.60|in Appendix|A.3.2)\nK1 Ko E Lak| W Tr (WAW k Ko k=1 k=1\nOn the other hand. the low-rank matrix W can filter out the noise of stimulus x. Consider the transformation Y = wTX with X = [x1,...,x] and Y = [y1, ...,ym] for M samples. It follows from the singular value decomposition (SVD) of X that.\nX = M -1U,1/2vT ~ X\nY = wTX = aV,>-1/2u?UsVT =wTX = aVM -1V,VT\nAnother advantage of a low-rank matrix W is that it can significantly reduce overfitting for learnin neural population parameters. In practice, the constraint (A.47) is equivalent to a weight-decay reg ularization term used in many other optimization problems (Cortes & Vapnik]1995) |Hinton]2010 which can reduce overfitting to the training data. To prevent the neural networks from overfitting Srivastava et al.[(2014) presented a technique to randomly drop units from the neural network dur ing training, which may in fact be regarded as an attempt to reduce the rank of the weight matri because the dropout can result in a sparser weights (lower rank matrix). This means that the updat is only concerned with keeping the more important components, which is similar to first performin a denoising process by the SVD low rank approximation.\nIn this stage, we have obtained the optimal parameter W (seeA.41). The optimal value of matri Vo can also be determined, as shown in Appendix[A.3.3\nFor this stage, our goal is to maximize the MI I(Y; R) and get the optimal parameters Ok k = 1,..., K1. Here the input is y = (y1,... ,yk)T and the output r = (r1,...,rn)T is also clustered into K1 classes. The responses of N neurons in the k-th subpopulation obey a Pois- son distribution with mean f(eTy; 0k), where e is a unit vector with 1 in the k-th element and Yk = eT y. By (A.24) and (A.26, we have\nJk\nj(y) = diag Na1 yu QK 9 K y K dqk Jk(yk) =1,...,K1, f(yk;0k),k=1,...,K1. 1k\nKo 2 k=1 Ko = arg min k > E K K! 2 k=1\nJ(y 1 I(Y;R) ~ IF = 1n det +H(Y) 2 2e y\nK1 NQk|gk(yk) ln +H(Y) 2 2e k=1 y K1 K1 K1 < 1n 1n I2 2e 2 k=1 y\nwhere the equality holds if and only if\n1 Qk =1,...,K1 K1\nOn the other hand. it follows from the Jensen's inequality that\nJ(y) Ir det 2e < ln det 2e\nWe assume that (A.63) holds, at least approximately. Hence we can let the peak of g%(yk) be at Yk ={yk) yr = 0 and(yk)y = 03, = ||we|l2. Then combining (A.57), (A.61) and (A.63) we find the optimal parameters 0p for the nonlinear functions f(yk; 0k), k = 1, . 
K1\nn the preceding sections we have obtained the initial optimal solutions by maximizing I ( X: Y\nFirst, we have\ny = WTx= ay,\ny =(y1,..,yK) =CTx=CTx C = VT E RKoxK1 -1/2UT x = Uox, x = Uo C = U,C = [c1,, CK]\nG(x) IX;R)=I det 2e G(x) = NW$wT +IK W = y1/2uTWA1/2 C\ndet IIK1l9%(yr)| 11 f IIK=1|9k(yk)|dy K1 det\n9k(Yk)| k=1,...,K1. f |gk(yk)|dyk\nwhere IK. is a K Ko diagonal matrix with value 1 on the diagonal and\n= diag($(yi),... , $(yk)), dgk(yk) p(yk) = a-1 dyk 9k(y1 (yk;0k) yk=a-yk=cfx,k=1,.K1.\nK K0 Ig ~ If = - ln(a)+H(X) 2 2 - det CC 2 Ko N\n1 minimize Q[C]= 1n det (C$C 2 x\nIn the following we will discuss how to get the optimal solution of C for two specific cases\nK L Qi[C] = ln($(yk)) k=1 dQ1[C] dC (Yk ....K1 WE\nwhere IK, is a K Ko diagonal matrix with value 1 on the diagonal and = 2, (A.73) = diag($(y1),... , $(yk)), (A.74) 1dgk(yk) P(yk) = a-1 (A.75) dyk gk(yk) =2Vf(yk;0k), (A.76) yk=a-yk=cfx,k=1,... K1 (A.77) Then we have. det(G(+)) =det (NK1C$C +Ij (A.78) For large N and Ko/N > 0, we have. det (G(x)) ~ det (J(x)) = det (NK-1C (A.79) (A.80) (A.81) Ko (A.82) N Hence we can state the optimization problem as: minimize Q [C] = In (det (A.83) subject to CCT = IKo: (A.84) The gradient from (A.83) is given by:. dQ[C]_ (A.85) dC where C = [C1,... , CKi], w = (w1,... , WK) )T, and Ck,k=1,..,K1. (A.86) Wk =b(yk)$'(yk)cF(C$C\ndet (G()) = det NK-1C$C\ndQ[C] dC X\ndCt t+1 t dt T dCt dQ1[Ct] dQ1[Ct] + Ct Ct dt dCt dCt\nwhere the learning rate parameter t changes with the iteration count t, t = 1, ... , tmax. Here we can use the empirical average to approximate the integral in (A.88) (see Eq. A.12). We can alsc apply stochastic gradient descent (SGD) method for online updating of Ct+1 in (A.90)\nThe orthogonality constraint (Eq.A.84) can accelerate the convergence rate. In practice, the orthog onal constraint (A.84) for objective function (A.83) is not strictly necessary in this case. We can completely discard this constraint condition and consider\nK1 ln($(yk)) ln (det (CT C)) minimize Q2 [C = - 2 k=1\ndQ2[C] dCT [C] dQ2[C Q2 Tr r dC dt dC dCT\nTherefore we can use an update rule similar to Eq.A.90|for learning C. In fact, the method can alsc be extended to the case Ko > K by using the same objective function (A.92)\nThe learning rate parameter t (see[A.90) is updated adaptively, as follows. First, calculate\nand Ct+1 by (A.90) and (A.91), then calculate the value Q1[Ct+1]. If Q1[Ct+1] < Q1[Ct], then let Ut+1 Vt, continue for the next iteration; otherwise, let vt TVt, t Vt/Kt and recalculate Ct+1 and Q1[Ct+1]. Here 0 < v1 1 and 0 < T < 1 are set as constants. After getting Ct+1 for each update, we employ a Gram-Schmidt orthonormalization process for matrix Ct+1, where. the orthonormalization process can accelerate the convergence. However, we can discard the Gram Schmidt orthonormalization process after iterative to (> 1) epochs for more accurate optimization. solution C. In this case, the objective function is given by the Eq. (A.92). We can also further. optimize parameter 6 by gradient descent..\nWhen Ko = K1, the objective function Q2 [C] in Eq. (A.92) without constraint is the same as the objective function of infomax ICA (IICA) (Bell & Sejnowski]1995) 1997), and as a consequence we should get the same optimal solution C. Hence, in this sense, the IICA may be regarded as a special case of our method. Our method has a wider range of applications and can handle more generic situations. 
Our model is derived by neural populations with a huge number of neurons and it is not restricted to additive noise model. Moreover, our method has a faster convergence rate during training than IICA (see Section3).\ndC -CCr dQ2[C] dt dC\nUt Pt ,tmax Kt K1 1 l|VCt(:,k)|l Kt K1 JCt(:, k)|l k=1\nIn this case, it is computationally expensive to update C by using the gradient of Q (see Eq. A.85) since it needs to compute the inverse matrix for every x. Here we provide an alternative method for learning the optimal C. First, we consider the following inequalities.\nProposition 2. The following inequations hold\ndet det 2 (ln (det (CCT)))x ln (det (C()x C7 det(C()?CT ln (det (CCT (det CC ere C e RKoxK1 K and CcT\nwhere U is a Ko Ko unitary orthogonal matrix, V = [V1, V2,... , VK] is an K1 K1 unitary. orthogonal matrix, and D is an Ko K1 rectangular diagonal matrix with Ko positive real numbers. on the diagonal. By the matrix Hadamard's inequality and Cauchy-Schwarz inequality we have\ndet(CCTCCT) det C cT CVD = det DV DD = det vT cT cVj : det Ko C.K k=1 Ko 11 k=1\nSimilarly, we get inequality (A.99\n1 ln (det (C() CT)) det C() n 2\nC = UDyT\nQ(yk)) Vk=1,...,K1\n1 det C()? det n n C T 2 + 2 X\nCopt = arg min Q[C] = arg max (det(CC Copt = arg min Q[C] = arg max (det(C()?cT))\n(det($ 1 det 2 2 X (ln (det ()))x ln(det (()x)) 7 n det 2 1 1n det 1 ln (det ()) det 2 30n004 :on Dlthat\n(ln (det (CCT))) -Q n det\nln (det (CCT))) - det\n=1,...,K1 f$(yk) dyk\nTherefore, we can use the following objective function Q[C] as a substitute for Q[C] and write the optimization problem as:\n1 minimize Q[C] = n det C() 2\nThe update rule (A.90) may also apply here and a modified algorithm similar to Algorithm 1 may be used for parameter learning.\nk=(wfx,k=1,.,K K1\nwhere qk(yk) is quantized as discrete qk(n) and is the step size\nMethods such as IICA and SRBM as well as our methods have feedforward structures in whic nformation is transferred directly through a nonlinear function, e.g., the sigmoid function. W an use the amount of transmitted information to measure the results learned by these method. Consider a neural population with N neurons, which is a stochastic system with nonlinear transfe unctions. We chose a sigmoidal transfer function and Gaussian noise with standard deviation set t as the system noise. In this case, from (1), (A.8) and (A.11), we see that the approximate MI Ig i equivalent to the case of the Poisson neuron model. It follows from (A.70)-(A.82) that\nI(X;R)=IX;R =H(X)-HX|R)~IG=H(X)-h1 (X|R NK-1C$C +Ik ~ h1 det 2\nwhere we set N = 106. A good representation should make the MI I(X; R) as big as possible Equivalently, for the same inputs, a good representation should make the conditional entropy (CDE small as nossihle\nwhere x is defined by Eq. (A.68), and wx is the corresponding optimal filter. To estimate the probability density of coefficients qk(yk) (k = 1, ... , K1) from the M training samples, we apply the kernel density estimation for qk(yk) and use a normal kernel with an adaptive optimal window width. Then we define the CFE h as\nK 1 1 HR(Yk), h. K1 k=1 Hk(Yk) =-nqk(n) log2 qk(n)\n(a) (b) (c) (d) (e) (f) Figure 4: Comparison of basis vectors obtained by our method and other methods. Panel (a)-(e)\nFigure 4: Comparison of basis vectors obtained by our method and other methods. Panel (a)-(e) correspond to panel (a)-(e) in Figure [2] where the basis vectors are given by (A.130). 
The basis vectors in panel (f) are learned by MBDL and given by (A.127)."}, {"section_index": "8", "section_name": "A.5.2 COMPARISON OF BASIS VECTORS", "section_text": "We compared our algorithm with an up-to-date sparse coding algorithm, the mini-batch dictionary learning (MBDL) as given in (Mairal et al.]2009] 2010) and integrated in Python library, i.e. scikit- learn. The input data was the same as the above, i.e. 10 nature image patches preprocessed by the ZCA whitening filters.\nx ~ U1/2uTBy = By B = U1/2uTB,\nx ~ By = aBC1 X B = a-1U, C= [bi,..., bK."}, {"section_index": "9", "section_name": "A.5.3 LEARNING OVERCOMPLETE BASES", "section_text": "We have also trained our model on 60,000 images of handwritten digits from MNIST dataset (LeCun et al.] 1998) and the resultant 400 typical optimal filters and bases are shown in Figure 5(c)[and Figure[5(d)] respectively. All parameters were the same as Figure|5(a)|and Figure[5(b)] K1 = 1024, tmax = 100, v1 = 0.4, T = 0.8 and e = 0.98, from which we got rank (B) = Ko = 183. From these figures we can see that the salient features of the input images are reflected in these filters and bases. We could also get the similar overcomplete filters and bases by SRBM and MBDL. However, the results depended sensitively on the choice of parameters and the training took a long time\nWe denotes the optimal dictionary learned by MBDL as B E RKK1 for which each column represents a basis vector. Now we have.\nWe have trained our model on the Olshausen's nature image patches with a highly overcomplete setup by optimizing the objective (A.118) by Alg.2 and got Gabor-like filters. The results of 400 typical filters chosen from 1024 output filters are displayed in Figure 5(a)|and corresponding base (see Eq. A.130) are shown in Figure 5(b)]Here the parameters are K1 = 1024, tmax = 100, v1 = 0.4, t = 0.8, and e = 0.98 (see|A.52), from which we got rank (B) = Ko = 82. Compared to the ICA-like results in Figure2(a)2(c)] the average size of Gabor-like filters in Figure 5(a)|is bigger, indicating that the small noise-like local structures in the images have been filtered out.\nWe have also performed additional tests on other image datasets and got similar results, confirming the speed and robustness of our learning method. Compared with other methods, e.g., IICA, FICA MBDL, SRBM or sparse autoencoders etc., our method appeared to be more efficient and robust for unsupervised learning of representations. We also found that complete and overovercomplete filters and bases learned by our methods had local Gabor-like shapes while the results by SRBM or MBDI did not have this property.\n(a) (b) / C J - - ) 7 7 4 5 C 7 C (c) (d) 1.\nFigure 5: Filters and bases obtained from Olshausen's image dataset and MNIST dataset by Al gorithm 2. (a) and (b): 400 typical filters and the corresponding bases obtained from Olshausen's. image dataset, where Ko = 82 and Kj = 1024. (c) and (d): 400 typical filters and the corresponding. bases obtained from the MNIST dataset, where Ko = 183 and Kj = 1024.\nFigure[6|shows that CFE as a function of training time for Alg.2, where Figure[6(a)|corresponds to. Figure 5(a)|5(b)|for learning nature image patches and Figure [6(b)|corresponds to Figure|5(c)5(d) for learning MNIST dataset. We set parameters tmax = 100 and t = 0.8 for all experiments and varied parameter v1 for each experiment, with v1 = 0.2, 0.4, 0.6 or 0.8. These results indicate a fast. convergence rate for training on different datasets. 
Generally, the convergence is insensitive to the change of parameter v1."}, {"section_index": "10", "section_name": "A.5.4 IMAGE DENOISING", "section_text": "Similar to the sparse coding method applied to image denoising (Elad & Aharon) 2006), our method (see Eq.A.130) can also be applied to image denoising, as shown by an example in Figure[7] The filters or bases were learned by using 7 7 image patches sampled from the left half of the image, and subsequently used to reconstruct the right half of the image which was distorted by Gaussian noise. A common practice for evaluating the results of image denoising is by looking at the difference between the reconstruction and the original image. If the reconstruction is perfect the difference should look like Gaussian noise. In Figure [7(c)|and 7(d)|a dictionary (100 bases) was learned by MBDL and orthogonal matching pursuit was used to estimate the sparse solution.|For our method (shown in Figure|7(b)), we first get the optimal filters parameter W, a low rank matrix (Ko < K), then from the distorted image patches xm (m = 1, .:. , M) we get filter outputs ym = WI'xm and the reconstruction Xm = Bym (parameters: e = 0.975 and Ko = K1 = 14). As can be seen from Figure[7] our method worked better than dictionary learning, although we only used 14 bases compared with 100 bases used by dictionary learning. Our method is also more efficient. We can get better optimal bases B by a generative model using our infomax approach (details not shown).\n' Python source code is available at http://scikit-learn.org/stable/_downloads/plot_image_denoising.py\n1.95 2.1 v, = 0.2 (ooae) eooe ennrees v,=0.2 (ooe) eooeenreny v,=0.4 v =0.4 2 v, = 0.6 v, = 0.6 1.9 v = 0.8 v = 0.8 1.9 1.85 1.8 1.8 1.7 1.75 1.6 100 101 102 100 101 102 time (seconds) time (seconds) (a) (b) ure 6: CFE as a function of training time for Alg.2, with v1 = 0.2, 0.4, 0.6 or 0.8. In eriments parameters were set to tmax = 100, to = 50 and t = 0.8. (a): corresponding ure |5(a)[or Figure 5(b)] (b): corresponding to Figure[5(c)|or Figure[5(d)\nImage Difference (norm: 23.48) Image Difference (norm: 14.24) (a) (b) Orthogonal Matching Pursuit Orthogonal Matching Pursuit 1 atom 2 atoms Image Difference (norm: 15.79) Image Difference (norm: 14.47) (c) (d) Hiqu d1q1\nFigure 7: Image denoising. (a): the right half of the original image is distorted by Gaussian noise and the norm of the difference between the distorted image and the original image is 23.48. (b): image denoising by our method (Algorithm 1), with 14 bases used. (c) and (d): image denoising using dictionary learning, with 100 bases used."}] |
rkpdnIqlx | [{"section_index": "0", "section_name": "THE VARIATIONAL WALKBACK ALGORITHM", "section_text": "Anirudh Goyal*, Nan Rosemary Ke, Alex Lamb, Yoshua Bengios\nA recognized obstacle to training undirected graphical models with latent vari- ables such as Boltzmann machines is that the maximum likelihood training pro- cedure requires sampling from Monte-Carlo Markov chains which may not mix well, in the inner loop of training, for each example. We first propose the idea that it is sufficient to locally carve the energy function everywhere so that its gra dient points in the \"right' direction (i.e., towards generating the data). Following on previous work on contrastive divergence, denoising autoencoders, generative stochastic networks and unsupervised learning using non-equilibrium dynamics. we propose a variational bound on the marginal log-likelihood of the data which corresponds to a new learning procedure that first walks away from data points by following the model transition operator and then trains that operator to walk back- wards for each of these steps, back towards the training example. The tightness of the variational bound relies on gradually increasing temperature as we walk away from the data, at each step providing a gradient on the parameters to maximize the probability that the transition operator returns to its previous state. Interestingly, this algorithm admits a variant where there is no explicit energy function, i.e. the parameters are used to directly define the transition operator. This also elimi- nates the explicit need for symmetric weights which previous Boltzmann machine or Hopfield net models require, and which makes these models less biologically plausible."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Although earlier research focused on generating data through Monte Carlo Markov chains (MCMCs), e.g. with various Boltzmann machines (Salakhutdinov & Hinton2009), most of the recent effort in designing deep generative models is based on single-step generation, e.g., with vari ational auto-encoders (VAEs) (Kingma & Welling]2013]|Rezende et al.2014) and generative adver sarial networks (GANs) (Goodfellow et al.|2014). However, generating a sample by going through a series of stochastic transformations that gradually improve the generated sample (or its latent rep- resentation) to make it more plausible could hold some advantages. A generative process can be seen as a mapping from simple noise variates (e.g., uniform, Gaussian) to samples from a very com plicated distribution (maybe concentrated near a low-dimensional manifold) approximating the one which we are trying to learn from. If the data distribution is complex (e.g., the corresponding man ifold is highly convoluted and non-linear), the generative process may involve a highly non-linear transformation which could be difficult to learn and optimize. Such highly non-linear transforma tions are probably best represented (and learned) by composing a large number of slightly non-lineai transformations, either with a fixed-depth deep network, or with a variable depth recurrent compu tation, which is what the repeated application of a transition operator corresponds to."}, {"section_index": "2", "section_name": "1.1 MOTIVATIONS", "section_text": "The main motivation for the paper are the following\n. 
The main difference between feedforward generation and recurrent generation is two fold:(1) in the recurrent case, the same parameters are used for each step of the transition"}, {"section_index": "3", "section_name": "1.2 GENERAL THEORY", "section_text": "We introduce a novel variational bound which is an alternative to and improves upon the traditional. reconstruction error as a training objective for DAEs and GSNs. Similar variational bounds have. been used for VAEs as well as for the non-equilibrium thermodynamics generative models (Sohl-. Dickstein et al.[2015). A distribution P over a chain of samples is defined, which corresponds to iteratively applying transition operators with shared parameters, starting from a pure noise initial. state. We would like this process to produce training examples. An inverting flow Q is defined. starting from a training example (the \"walk-away' trajectory), and following the transition operator of the model, i.e., estimating the posterior distribution of the generative chain produced by P, giver. that it were landing at a training example. If the model does not match the data distribution, that. chain Q will tend to walk away from the training samples, and we want to inhibit that by training P. to \"walk back\". Instead of using a completely different parametrization for the variational approxi-. mation of the posterior (the Q distribution), like in VAEs and non-equilibrium dynamics, we propose. to exploit the decomposition of P as a series of stochastic transformations in order to parametrize Q with the same parameters as P, with the step-wise estimated posterior matching the correct one. (from P) for all but the last step of the walk-away trajectory. To make the approximation in the.\noperator, and (2) by providing an interpretation of each of these steps as the application. of a transition operator, we can design training procedures which do not require back- propagating through all the steps of the unfolded computation (from the raw noise samples to the generated output). This is a potential that clearly deserves to be explored further and motivates the learning framework introduced here.. Another motivation for the Variational Walkback is the idea that we only need to carve the energy function in the right direction at each point in the space of the random variables of. interest, which may sideskip the need to actually sample from the stationary distribution of a Markov chain in order to obtain the gradients of the training objective. The intuition is. that if the model's transition operator wants to move away from the data and into an area. without data, this is a clue that the energy gradient is pointing in the wrong direction at that. place. Consider a chain of samples following the model's transition operator (or variants of it at different temperatures), starting at a data point. If the chain moves us away from data points, then we can use the previous state in the chain as a target for the operator when that. operator is applied to the next next state, i.e., we want to teach the operator to walk back. towards the data. This intuition was already exploited by Bengio et al.[(2013c) but without a firm mathematical grounding. In Variational Walkback this is rigorously justified by a. variational bound. .Yet another motivation for the particular approach presented here is that it innovates in the. 
rarely explored direction of parametrizing directly the generative model via a transition operator, rather than via an explicit probability function or energy function. This idea. has already been discussed in the context Generative Stochastic Networks (GSNs) (Bengio et al.]2013b), a generalization of denoising auto-encoders (DAEs) (Vincent et al.2008) which interprets the auto-encoder as estimating the gradient of an energy function (Alain & Bengio[2014) or as a transition operator (Bengio et al. 2013c). An advantage of being able to parametrize directly the generator is seen with GANs and DAEs: we directly parametrize and learn the function which will be used to perform the task of interest (e.g. generating. answers to some questions). Instead, the traditional approach is to parametrize a probability function or energy function (e.g., with a Boltzmann machine) and then then use another. procedure (the MCMC method of your choice) to sample from it and do inference. Another. important reason for exploring algorithms for directly learning a transition operator is that. they put less constraint on the form of the transition operator, compared with a transition. operator derived from an energy function. More specifically, neural net implementations of transition operators derived from an MCMC typically require the presence of symmetric weights (due to the symmetry of the second derivative of the energy with respect to a pair of units in the neural network), as discussed by Bengio et al.[(2015). When we consider a biologically plausible implementation of these learning algorithms, the weight symmetry. constraint (Wi; = Wji) is not reasonable as a hard constraint. Instead, if the transition. operator (rather than the energy function) is the object being parametrized and learned,. then there is no such hard constraint..\nlast step of the chain of walk-away steps better (and thus the variational bound tighter) we introduc. the idea of gradually increasing temperature at each step of the walk-away Q chain of transition. (or gradually reducing temperature, at each step of the corresponding walkback trajectory under P). This also has the advantage that the training procedure will more easily converge to and eliminate. spurious modes (those modes of the model where there is no nearby training data). This is becaus. the walk-away Q chain will be making large steps towards the dominant and most attractive mode when the temperature becomes large enough. Unless those modes are near data points, the walkbacl. algorithm will thus \"seek and destroy'' these modes, these spurious modes.\nWe present a series of experimental results on several datasets illustrating the soundness of the proposed approach on the MNIST, CIFAR-10 and CelebA datasets.\nLet v denote the vector of visible units and h denote the vector of hidden random variables, witl the full state of the model being s = (v, h). Let pe denote the model distribution, with joint energy function Ee and parameter vector 0:\nLet pp be the training distribution, from which a sample D is typically drawn to obtain the training set. The maximum likelihood parameter gradient is\nwhich is zero when training has converged, with expected energy gradients in the positive phase (under pp(v)pe(h[v)) matching those under the negative phase (under pe(s)). Note that in the (common) case of a log-linear model, the energy gradient (with respect to parameters) corresponds to the sufficient statistics of the model. 
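As a concrete illustration, the gradient estimate for a hypothetical log-linear model can be sketched as follows (a minimal sketch; all names are ours, not the paper's):

```python
import numpy as np

# Minimal sketch (ours, not the paper's code): for a log-linear model with
# energy E_theta(s) = -theta . phi(s), the log-likelihood gradient is the
# difference of expected sufficient statistics under the two phases.
def ml_gradient(phi, positive_samples, negative_samples):
    pos = np.mean([phi(s) for s in positive_samples], axis=0)  # data-clamped
    neg = np.mean([phi(s) for s in negative_samples], axis=0)  # free-running model
    return pos - neg  # ascend this direction to raise p_theta on the data
```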
Training thus consists in matching the shape of two distributions, as captured by the sufficient statistics: the positive phase distribution (influenced by the data via the visible units) and the negative phase distribution (where the model is free-running and generating configurations by itself).

The basic idea of the proposed mixing-free training framework for undirected graphical models is the following. Instead of trying to match the whole positive phase and negative phase distributions (each of which requires a difficult sampling operation, generally with an MCMC that may take a very long time to mix between well-separated modes), we propose to only match the shape of the energy function locally, around well-chosen points $s_t$. Another way to think about this is that instead of trying to directly maximize the likelihood of $p_\theta$, which requires expensive inference (ideally an MCMC) in the inner loop of training (for each example $v \sim p_D$), we would like to learn a transition operator $p_T(s_{t+1}|s_t)$ such that following it at temperature $T = 1$ would gradually move the state $s_t$ towards the data generating distribution.

For this purpose, we propose to use a walkback strategy similar to the one introduced by Bengio et al. (2013c), illustrated in Algorithm 1. The idea is to start from a configuration of $s$ which is compatible with the observed data $x$, let the state evolve according to our transition operator, and then punish it for these moves, making it more likely to make backwards transitions on this trajectory. If learning were completed, the only moves that would remain are those between highly probable configurations under the data generating distribution. The other ones would be "punished", like a child walking away from its designated task and forced to walk back (towards the data)¹. Following the model's inclination in order to generate this random trajectory is more efficient than simply adding noise (like in the denoising auto-encoder (Vincent et al., 2008) or the non-equilibrium dynamics (Sohl-Dickstein et al., 2015) algorithms) because it makes the learning procedure focus its computation on state configurations corresponding to spurious modes to be eliminated. To make sure these spurious modes are approached efficiently, the proposed algorithm also includes the idea of gradually increasing temperature (i.e., the amount of noise) along this walk-away trajectory. At high temperature, the transition operator mixes very easily and quickly reaches the areas corresponding to large spurious modes.

Interestingly, all this comes out naturally from the variational bound presented below, rather than as something imposed in addition to the training objective.
"}, {"section_index": "4", "section_name": "Algorithm 1 VariationalWalkback(θ)", "section_text": "Train a generative model associated with a transition operator $p_T(s|s')$ at temperature $T$ (temperature 1 for sampling from the actual model). This transition operator injects noise of variance $T\sigma^2$ at each step, where $\sigma^2$ is the noise level at temperature 1.
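The update this procedure performs can be sketched as follows (a minimal PyTorch-style sketch under our own naming; the operator's `sample`/`log_prob` interface and the doubling temperature schedule are assumptions, and no gradient is propagated through the sampling operation):

```python
def variational_walkback_update(operator, x, K, optimizer, T0=1.0):
    """One training update: walk away from data x for K steps while heating,
    training the operator to walk back at every step. Sketch only: the
    operator interface is assumed, and gradients are local to each step."""
    s, T = x, T0
    loss = 0.0
    for _ in range(K):
        s_next = operator.sample(s, T).detach()   # walk away from the data
        loss = loss - operator.log_prob(prev=s, given=s_next, T=T)  # walk-back term
        s, T = s_next, 2.0 * T                    # heat up along the walk-away chain
    optimizer.zero_grad()
    loss.mean().backward()
    optimizer.step()
```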
Let us first consider a way in which our model could approximately generate samples using the associated transition operator $p_T(s|s')$. That process would start by sampling a state $s_K$ inside a volume that contains all the data, e.g., with a broad Gaussian $p^*(s_K)$ whose variances are set according to the training data. Then we would sample $s_{K-1}$ from $p_{T_{max}}(s|s' = s_K)$, where $T_{max}$ is a high enough temperature so that the noise dominates the signal and is strong enough to move the state across the whole domain of the data on the visible portion of the state. If $\sigma^2$ is the amount of noise injected by the transition operator on the visible units at temperature 1, then we could pick

$$T_{max} = \frac{\sigma^2_{data}}{\sigma^2}$$

to achieve that goal. From that point on we are going to continue sampling the "previous" state $s_t$ according to $p_T(s|s' = s_{t+1})$ while gradually cooling the temperature, e.g., by dividing it by 2 after each step. In that case we would need

$$K = \log_2 T_{max}$$

steps to reach a temperature of 1. Finally, we would look at the visible portion of $s_0$ to obtain the sampled $x$. In practice, we would expect that a slower annealing schedule would yield samples more in agreement with the stationary distribution of $p_1(s|s')$, but we explored this aggressive annealing schedule in order to obtain faster training.

¹This analogy with a child was first used in talks by Geoff Hinton when discussing contrastive divergence (personal communication).

The marginal probability of $v = x$ at the end of the above $K$-step process is thus

$$p(x) = \int_{s_1 \ldots s_K} p_{T_0}(s_0 = x|s_1) \left( \prod_{t=2}^{K} p_{T_t}(s_{t-1}|s_t) \right) p^*(s_K)\, ds_1 \ldots ds_K \qquad (6)$$

where $T_t$ is an annealing schedule with $T_0 = 1$ and $T_K = T_{max}$, and $p^*$ is the "starting distribution", such as the Gaussian of variance $\sigma^2_{data}$. We can rewrite this as follows by taking the log and multiplying and dividing by an arbitrary distribution $q(s_1, \ldots, s_K)$ decomposed into conditionals $q_{T_t}(s_t|s_{t-1})$:

$$\log p(x) = \log \int_{s_1 \ldots s_K} q_{T_0}(x)\, q_{T_1}(s_1|s_0 = x) \prod_{t=2}^{K} q_{T_t}(s_t|s_{t-1}) \;\frac{p_{T_0}(s_0 = x|s_1) \prod_{t=2}^{K} p_{T_t}(s_{t-1}|s_t)\, p^*(s_K)}{q_{T_0}(x)\, q_{T_1}(s_1|s_0 = x) \prod_{t=2}^{K} q_{T_t}(s_t|s_{t-1})}\, ds_1 \ldots ds_K \qquad (7)$$

where we understand that $s_0 = x$. Now we can apply Jensen's inequality as usual to obtain the variational bound

$$\log p(x) \geq \mathcal{L} = \int_{s_1 \ldots s_K} q_{T_0}(x)\, q_{T_1}(s_1|s_0 = x) \prod_{t=2}^{K} q_{T_t}(s_t|s_{t-1}) \,\log \frac{p_{T_0}(s_0 = x|s_1) \prod_{t=2}^{K} p_{T_t}(s_{t-1}|s_t)\, p^*(s_K)}{q_{T_0}(x)\, q_{T_1}(s_1|s_0 = x) \prod_{t=2}^{K} q_{T_t}(s_t|s_{t-1})}\, ds_1 \ldots ds_K \qquad (8)$$

This bound is valid for any $q$ but will be tight when $q(s_K, s_{K-1}, \ldots, s_1|s_0) = p(s_K, s_{K-1}, \ldots, s_1|s_0)$, and otherwise can be used to obtain a variational training objective. Note that both $q$ and $p$ can be decomposed as a product of one-step conditionals. Here, we can make most of the $q_{T_t}$ transition probabilities match their corresponding $p_{T_t}$ transition probabilities exactly, i.e., for $1 \le t < K$ we use

$$q_{T_t}(s|s') = p_{T_t}(s|s')$$

The only approximations will be on both ends of the sequence: Sampling exactly from the model's $p(v = x)$ is typically not feasible (it involves the usual posterior inference, e.g., as used in VAEs) but as explained below we will exploit properties of the algorithm to approximate this efficiently. We call the chosen approximation $q_1(v)$. At the last step, the optimal $q_{T_K}(s_K|s_{K-1})$ is not simply the model's transition operator at temperature $T_K$, because this conditional also involves the marginal "starting distribution" $p^*(s_K)$.
However, because we have picked Tk large enough to make samples from qTmax(sk|Sk-1) dominated by noise of the same variance as that of p*, we expect the approximation to be good too.\ntend to also push up the log-likelihood log p(x) of training examples x. Note that such variationa bounds have been used successfully in many learning algorithms in the past (Kingma & Welling 2013; Lamb et al.]2016).\nWe derive an estimate of the negative log-likelihood by the following procedure. For each training example x, we sample a large number of diffusion paths. We then use the following formulation to estimate the negative log-likelihood.\nlogp(x) = log E x~pD,qTo(x)qT1(S1|so(x,))(IK=2 qTt(st|St-1 K PTo (So = x[S1) 1t=2 PTtSt qr,(x)qT(s1|So = x)(IIt=2qT(St|St-1)\nUp to now we have not specified what the form of the transition operators should be. Two main variants are possible here. Either we directly parametrize the transition operator, like with denoising auto-encoders or generative stochastic networks, or we obtain our transition operator implicitly from some energy function, for example by applying some form of Gibbs sampling or Langevin MCMC to derive a transition operator associated with the energy function.\nAn advantage of the direct parametrization is that it eliminates the constraint to have symmetric weights, which is interesting from the point of view of biological plausibility of such algorithms. An advantage of the energy-based parametrization is that at the end of the day we get an energy function which could be used to compute the unnormalized joint probability of visible and latent variables. However, note that in both cases we can easily get an estimator of the log-likelihood by simply using our lower bound L, possibly improved by doing more expensive inference for pTk (sk SK-1)."}, {"section_index": "6", "section_name": "4.1 PARAMETRIC TRANSITION OPERATOR", "section_text": "In our experiments we considered Bernoulli and isotropic Gaussian transition operators for binar. and real-valued data respectively.\nWhen we sample from the transition operator we do not attempt to pass gradients through the sam. pling operation. Accordingly, backpropagation is performed locally on each step of the walk-back and there is no flow of gradient between multiple walk-back steps..\nAdditionally, we use a \"conservative\"' transition operator that averages its input image together witl the sample from the learned distribution (or takes a weighted average with a fixed a weighting) for the transition operator. Just after parameter initialization, the distribution learned by the transition operator's output is essentially random, so it is very difficult for the network to learn to reconstruct the value at the previous step.\nBernoulli Transition Operator\nFo, Fu, F, are functions (in our case neural networks) which take the previous x value from the. walkback chain and return estimates of the value of and o respectively. T is the temperature which is dependent on the walkback step t. xt-1 is the previous value in the walkback chain..\n)*Xt-1+Q* Fp(Xt-1) p = sigmoid Tt\nu = (1 -a) *Xt-1+Q*F(Xt-1)\n= sigmoid(Tt log(1 + eFs) t-"}, {"section_index": "7", "section_name": "Contrastive Divergence", "section_text": "This algorithm is clearly related to the contrastive divergence algorithm with k = T steps (CD k). The CD-k algorithm approximates the log-likelihood gradient by trying to match the sufficient statistics with the data clamped to the sufficient statistics after k steps of the transition operator. 
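For comparison, a CD-k update can be sketched as follows (a minimal sketch; `phi` and `operator_step` are hypothetical stand-ins for the sufficient statistics and one step of the transition operator):

```python
def cd_k_update(theta, x, k, phi, operator_step, lr=1e-3):
    # Sketch of CD-k (names are ours): run the model's transition operator
    # for k steps starting from the data, then move the parameters toward the
    # data statistics and away from the chain's statistics.
    s = x
    for _ in range(k):
        s = operator_step(s)          # negative chain at a fixed temperature
    return theta + lr * (phi(x) - phi(s))
```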
The parameter update is the difference of these sufficient statistics, which also corresponds to pushing down the energy of the data-clamped configuration while pushing up the energy of the random variables after k steps of the transition operator.\nTwo important differences are that, because the temperature is increasing in the variational walk back procedure,\nA third difference is that the learning procedure is expressed in terms of the transition operator rather than directly in terms of the energy function. This allows one to thus train a transition operator directly. rather than indirectly via an energy function"}, {"section_index": "8", "section_name": "Generative Stochastic Networks", "section_text": "The Generative Stochastic Networks (GsN) algorithm proposed byBengio et al.(2013b) learns. a transition operator by iteratively injecting noise and minimizing the reconstruction error afte. a number of transition operator steps starting at a data point, and back-propagating through all. these steps. One thing in common is the idea of using the walkback intuition instead of isotropic. noise in order to converge more efficiently. A major difference is that the algorithm proposed fo. GSNs involves the minimization of overall reconstruction error (from the input data point x to the. sampled reconstruction many steps later). This will tend to blur the learned distribution. Instead the variational walk-back algorithm minimizes reconstruction error one step at a time along the. walk-away trajectory.\nIn addition, the variational walkback GSNs require back-propagating through all the iterated steps like the DRAW algorithm (Gregor et al.|2015). Instead the variational walk-back algorithm only requires back-propagating through a single step at a time of the transition operator. This should make it easier to train because we avoid having to optimize a highly non-linear transformation obtainec by the composition of many transition operator steps.\nThere are two main differences between the Variational Walkback algorithm and the No Equilibrium Thermodynamics:"}, {"section_index": "9", "section_name": "Annealed Importance Sampling (AIS", "section_text": "Annealed Importance Sampling is a sampling procedure. Like variational walkback, it uses ar. annealing schedule corresponding to a range of temperature from infinity to 1. It is used to estimate : partition function. Unlike Annealed Importance Sampling, variational walkback is meant to provide. a good variational lower bound for training a transition operator..\nE(s) do not cancel each other telescopically along the chain from so 1. the energy gradients ds tO ST, 2. as t increases we move more and more randomly rather than following the energy of the model, allowing to hunt more effectively the areas near spurious modes..\n1. Instead of isotropic noise to move away from the data manifold, we propose to use the. model's own transition operator, with the idea that it will \"seek and destroy' the spurious. modes much more efficiently than random moves.. 2. Instead of injecting a fixed amount of noise per time step, we increase the noise as it moves. away from the data manifold, and anneal the noise when we are close to the data manifold.. This way, we can quickly reach the noise prior without loosing the details of the data. Our. 
model takes significantly fewer steps to walk away and back to the manifold, as compared to the 1000 steps used for Non-Equilibrium Thermodynamics..\nWe evaluated the variational walkback on three datasets: MNIST, CIFAR (Krizhevsky & Hinton 2009), and CelebA (Liu et al.]2015). The MNIST and CIFAR datasets were used as is, but the aligned and cropped version of the CelebA dataset was scaled from 218 x 178 pixels to 78 x 64 pixels and center-cropped at 64 x 64 pixels (Liu et al.|2015). For all of our experiments we used the Adam optimizer (Kingma & Ba]2014) and the Theano framework (A1-Rfou et al.2016). The training procedure and architecture are detailed in appendix|A.\nFigure 1: Samples on MNIST using a Bernoulli likelihood in the transition operator, 8 walkback steps during training, and 13 walkback steps during sampling. On right. Diffusion process for sampling MNIST digits starting from bernoiulli noise. This shows how the variational walkback iteratively generates images starting from a noise prior. For intermediate steps we display samples and for the final step (right) we display the transition operator's mean.\n123\nFigure 2: Variational Walkback Inpainting MNIST the left half of digits conditioned on the righ half. The goal is to fill in the left half of an MNIST digit given an observed right half of an image (drawn from validation set).\nRAISE is a reverse AIS, as it starts from a data point and then increases the temperature. In this way it is similar to the Q-chain in variational walkback. The advantage of RAISE over AIS is that it yields an estimator of the log-likelihood that tends to be pessimistic rather than optimistic, which makes it better as an evaluation criteria\nLike AIS, RAISE estimates the log-likelihood using a form of importance sampling, based on a. product (over the chain) of the ratios of consecutive probabilities (not conditional probabilities from the model). Variational walkback does not work with estimates of the model's unconditional proba- bility, and instead works directly with a conditional probability defined by the transition operator. It. is for this reason that variational walkback does not need to have an explicit energy function)..\nFigure 3: Original Images from CelebA (left), Variational Walkback Reconstructions (middle) and Samples (right).\nFigure 4: Variational Walkback Samples on CIFAR10 (left and right)\nWe reported samples on CIFAR, MNIST, CelebA and inpainting results on MNIST. Our inpainting results on MNIST are competitive with generative stochastic networks and show somewhat higher consistency between the given part of the image and the generated portion (Bengio et al.] 2013c). However, we note that our samples on CIFAR and CelebA show the same \"blurring effect\"' that has been observed with autoencoder-based generative models trained to minimize reconstruction loss Lamb et al.2016)."}, {"section_index": "10", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "We have introduced a new form of walk-back and a new algorithm for learning transition operators. or undirected graphical models. Our algorithm learns a transition operator by allowing the model tc. walk-away from the data towards the noise prior and then teaching it to actually to have its transitions. trained to go backwards each of these walk-away steps, i.e., towards the data manifold. Variational. walk-back increases the temperature along the chain as it is moving further away from the data. 
manifold, and inversely, anneals the temperature at generation time, as it gets closer to the estimated. manifold. This allows the training procedure to quickly find and remove dominant spurious modes. Learning a transition operator also allows our model to learn only a conditional distribution at each. step. This is much easier to learn, since it only needs to capture a few modes per step. The model alsc. only locally carves the energy function, which means that it does not have to learn the entire joint. probability distribution, but rather steps towards the right direction, making sure that everywhere it. puts probability mass as well as around the data, the energy gradient is pointing towards the data\nFuture work should extend this algorithm and experiments in order to incorporate latent variables The state would now include both the visible x and some latent h. Essentially the same procedur an be run, except for the need to initialize the chain with a state s' = (x, h) where h would ideall e an estimate of the posterior distribution of h given the observed data point x. Another interestin lirection to expand this work is to replace the log-likelihood objective at each step by a GAN ike objective, thus avoiding the need to inject noise independently on each of the pixels, during one application of the transition operator, and allowing the latent variable sampling to inject al he required high-level decisions associated with the transition. Based on the earlier results fron Bengio et al.(2013a), sampling in the latent space rather than in the pixel space should allow fo etter generative models and even better mixing between modes Bengio et al.(2013b\ncan be run, except for the need to initialize the chain with a state s = (x, h) where h would ideally"}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Benjamin Scellier and Aaron Courville for their helpful feedback and discussions, as well as NSERC, CIFAR, Google, Samsung, Nuance, IBM and Canada Research Chairs for funding, and Compute Canada for computing resources.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair. Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems, pp. 2672-2680, 2014.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR abs/1412.6980,2014. URLhttp://arxiv.0rg/abs/1412.6980\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009\nAlex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016\nZiwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015\nAlec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with dee convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.\nDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.\nPascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international. conference on Machine learning, pp. 1096-1103. 
ACM, 2008.

Guillaume Alain and Yoshua Bengio. What regularized auto-encoders learn from the data-generating distribution. Journal of Machine Learning Research, 15(1):3563-3593, 2014.

Yoshua Bengio, Eric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep generative stochastic networks trainable by backprop. arXiv preprint arXiv:1306.1091, 2013b.

Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. CoRR, abs/1503.03585, 2015.
"}, {"section_index": "12", "section_name": "ARCHITECTURE DETAILS", "section_text": "The architecture that was used for the CelebA and CIFAR datasets was similar to the architecture used by Lamb et al. (2016), with a convolutional encoder followed by two fully connected hidden layers, followed by a decoder with strided convolutions (Radford et al., 2015). Batch norm was applied in all layers except for the last layer. For all layers except for the last we used the tanh activation function. Surprisingly, we were unable to obtain good results using the ReLU or Leaky ReLU activations.

On the binarized MNIST dataset we used a transition operator with Bernoulli outputs. A feedforward neural network was used to estimate the parameters (per-pixel probabilities) for the Bernoulli outputs. This neural network consisted of a single hidden layer with 4096 hidden units and the tanh activation function."}, {"section_index": "13", "section_name": "WALKBACK PROCEDURE DETAILS", "section_text": "The variational walkback algorithm has three unique hyperparameters. One is the number of walkback steps performed during training. Another is the number of walkback steps performed when sampling from the model. Still another is the temperature schedule used during training, reconstruction, or sampling.

A dynamic approach to setting the number of walkback steps and temperature schedule may be possible, but in this work we set these hyperparameters empirically. We found that during training a temperature schedule of $T = T_0 \sqrt{2}^{\,t}$ produced good results, where $T_0 = 1.0$ is the initial temperature and $t$ is the step index. During sampling, we found good results using the reverse of this schedule.
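These schedules can be written as the following sketch (the reverse-schedule formula below is our assumption, chosen to mirror the training schedule; step indices are zero-based):

```python
import math

def train_temperature(t, T0=1.0):
    # heat up along the walk-away chain: T = T0 * sqrt(2)^t
    return T0 * math.sqrt(2.0) ** t

def sample_temperature(t, num_steps, T0=1.0):
    # assumed reverse schedule: anneal back down toward T0 while sampling
    return T0 * math.sqrt(2.0) ** (num_steps - 1 - t)
```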
The most conservative hyperparameter setting would involve using a large number of walkback steps during training and slowly increasing the temperature. However, this could make training slow, and if too few steps are used, the end of the walkback chain will not match the noise prior, leading to low-quality samples.

For MNIST, we achieved our results using 8 training steps of walkback. For CIFAR, we used 15 training steps and 20 sampling steps. For CelebA, we used 30 training steps and 35 sampling steps. In general, we found that we could achieve higher quality results by using more steps during sampling than we used during training. We found that more difficult datasets, like CIFAR and CelebA, required longer walkback chains. Finally, our model is able to achieve results competitive with Non-Equilibrium Thermodynamics (Sohl-Dickstein et al., 2015), despite that method requiring chains with far more steps (1000 steps for MNIST).

We present an argument that running the walkback chain for a sufficient number of steps will cause the variational bound to become tight.

Consider a sequence $s_t, \ldots, s_1$ generated in that order by our model $p$ through a sequence of applications of the transition operator $T$, i.e., $p(s_1, \ldots, s_t) = p(s_t)\, T(s_{t-1}|s_t) \cdots T(s_1|s_2)$, so that $p(s_{n-1}|s_n) = T(s_{n-1}|s_n)$, but note that $p(s_n|s_{n-1}) \neq p(s_{n-1}|s_n)$.

Let $\pi(s)$ denote the stationary distribution associated with $T$. Note that $T$ and $\pi$ are related by the detailed balance equation, i.e., $T(s|s')\pi(s') = T(s'|s)\pi(s)$.

We want to approximate the posterior $p(s_t, \ldots, s_2|s_1)$ by the walk-away chain $q(s_2, \ldots, s_t|s_1) = \prod_{n=2}^{t} T(s_n|s_{n-1})$, which follows the same transition operator forward:

$$\begin{aligned} p(s_t, \ldots, s_2 | s_1) &= \frac{p(s_t)}{p(s_1)} \prod_{n=2}^{t} T(s_{n-1}|s_n) && \text{by telescopic cancellation and definition of } T \\ &= \frac{p(s_t)}{p(s_1)} \prod_{n=2}^{t} T(s_n|s_{n-1})\, \frac{\pi(s_{n-1})}{\pi(s_n)} && \text{now by detailed balance} \\ &= \frac{p(s_t)}{p(s_1)}\, \frac{\pi(s_1)}{\pi(s_t)} \prod_{n=2}^{t} T(s_n|s_{n-1}) && \text{by telescopic cancellation} \\ &= \frac{p(s_t)\,\pi(s_1)}{\pi(s_t)\,p(s_1)}\; q(s_2, \ldots, s_t | s_1) && \text{again by definition of } T \end{aligned}$$

So our approximation error in the posterior is the factor

$$\frac{p(s_t)\,\pi(s_1)}{\pi(s_t)\,p(s_1)}.$$

If $t$ is large enough, then $s_1$ (being at the end of the generative sequence) has pretty much converged, i.e., $p(s_1) \approx \pi(s_1)$.

If we throw in temperature annealing along the way (now the notation would have to be changed to put an index $n$ on both $p$ and $T$), with the initial temperature being very high, then we can hope that the initial Gaussian $p(s_t)$ is very similar to the stationary distribution at high temperature $\pi(s_t)$.

These arguments suggest that as we make $t$ larger and the final (initial) temperature larger as well, the approximation becomes better."}]
rkpACe1lx | [{"section_index": "0", "section_name": "HYPERNETWORKS", "section_text": "David Ha*, Andrew M. Dai, Quoc V. Le

Google Brain

{hadavid, adai, qvl}@google.com"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. We apply hypernetworks to generate adaptive weights for recurrent networks. In this case, hypernetworks can be viewed as a relaxed form of weight-sharing across layers. In our implementation, hypernetworks are trained jointly with the main network in an end-to-end fashion.
Our main result is that hypernetworks can gener ate non-shared weights for LSTM and achieve state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, hand writing generation and neural machine translation, challenging the weight-sharing. aradigm for recurrent networks..\napproach are Differentiable Pattern Producing Networks (DPPNs), where the structure is evolved but the weights are learned (Fernando et al.2016), and ACDC-Networks (Moczulski et al.]2015), where linear layers are compressed with DCT and the parameters are learned. Most reported results using these methods, however, are in small scales, perhaps because they are both slow to train and require heuristics to be efficient. The main difference between our approach and HyperNEAT is that hypernetworks in our approach are trained end-to-end with gradient descent together with the main network, and therefore are more efficient.\nAnother closely related idea to hypernetworks is the concept of fast weights Schmidhuber(1992 1993) in which one network can produce context-dependent weight changes for a second network. Small scale experiments were conducted to demonstrate fast weights for feed forward networks a the time, but perhaps due to the lack of modern computational tools, the recurrent network versior was mentioned mainly as a thought experiment (Schmidhuber! 1993). A subsequent work demon strated practical applications of fast weights (Gomez & Schmidhuber2005), where a generator network is learnt through evolution to solve an artificial control problem.\nThe focus of this work is to apply our method to recurrent networks. In this context, our metho has a connection to second-order or multiplicative networks (Goudreau et al.f|1994)|Sutskever et al 2011; Wu et al.2016), where the hidden state of the last step and the input vector of the current tim step interact in a multiplicative fashion. The key difference between our approach and second-orde networks is that our approach is more memory efficient because we only learn the scaling factors i the interaction matrix. Furthermore, in second-order or multiplicative networks, the weights of th RNN are not fixed, but a linear function of the previous hidden state. In our work, we explore th use of a smaller RNN, rather than a linear function, to produce the weights of the main RNN.\nThe concept of a network interacting with another network is central to the work of (Jaderberg et al. 2016, Andrychowicz et al.]2016), and especially (Denil et al.2013] Yang et al. 2015}Bertinetto et al. 2016De Brabandere et al.] 2016), where certain parameters in a convolutional network are predicted by another network. These studies however did not explore the use of this approach to recurrent networks, which is a main contribution of our work."}, {"section_index": "4", "section_name": "3.1 HYPERRNN", "section_text": "Our hypernetworks can be used to generate weights for the RNN and LSTM. When a hypernetwork is used to generate the weights for an RNN, we refer to it as the HyperRNN. At every time step t, a HyperRNN takes as input the concatenated vector of input xt and the hidden states of the main RNN ht-1, it then generates as output the vector ht. This output vector is then used to generate the weights for the main RNN at the same timestep. Both the HyperRNN and the main RNN are trained jointly with backpropagation and gradient descent. 
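Before the formal description below, one step of this scheme can be sketched as follows (a numpy rendering under our own names; `hyper` and `main` are simple parameter containers, and the precise parametrization is given in Equations 2-5):

```python
import numpy as np

def hyper_rnn_step(x_t, h_prev, h_hat_prev, hyper, main):
    # `hyper` and `main` are plain containers (e.g. types.SimpleNamespace)
    # holding the parameter arrays; all names here are ours.
    x_hat = np.concatenate([h_prev, x_t])          # HyperRNN input [h_{t-1}; x_t]
    h_hat = np.tanh(hyper.W_hh @ h_hat_prev + hyper.W_hx @ x_hat + hyper.b)
    z_h = hyper.P_h @ h_hat + hyper.b_h            # small embeddings (size N_z)
    z_x = hyper.P_x @ h_hat + hyper.b_x
    z_b = hyper.P_b @ h_hat
    d_h = main.W_hz @ z_h                          # weight-scaling vectors (size N_h)
    d_x = main.W_xz @ z_x
    b = main.W_bz @ z_b + main.b0
    # main RNN step with row-scaled (shared) weight matrices
    h_t = np.tanh(d_h * (main.W_h @ h_prev) + d_x * (main.W_x @ x_t) + b)
    return h_t, h_hat
```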
In the following, we will give a more formal description of the model.

The standard formulation of a Basic RNN is given by:

$$h_t = \phi(W_h h_{t-1} + W_x x_t + b) \qquad (1)$$

where $h_t$ is the hidden state, $\phi$ is a non-linear operation such as tanh or relu, and the weight matrices and bias $W_h \in \mathbb{R}^{N_h \times N_h}$, $W_x \in \mathbb{R}^{N_h \times N_x}$, $b \in \mathbb{R}^{N_h}$ are fixed at each timestep for an input sequence $X = (x_1, x_2, \ldots, x_T)$.

In HyperRNN, we allow $W_h$ and $W_x$ to float over time by using a smaller hypernetwork to generate these parameters of the main RNN at each step (see Figure 1). More concretely, the parameters $W_h$, $W_x$, $b$ of the main RNN are different at different time steps, so that $h_t$ can now be computed as:

$$h_t = \phi\big(W_h(z_h)\, h_{t-1} + W_x(z_x)\, x_t + b(z_b)\big), \text{ where } W_h(z_h) = \langle W_{hz}, z_h \rangle,\; W_x(z_x) = \langle W_{xz}, z_x \rangle,\; b(z_b) = W_{bz} z_b + b_0 \qquad (2)$$

The embeddings $z_h$, $z_x$ and $z_b$ are produced by the HyperRNN cell, which observes the concatenation of the main RNN's previous hidden state and the current input:

$$\hat{x}_t = \begin{pmatrix} h_{t-1} \\ x_t \end{pmatrix}, \quad \hat{h}_t = \phi(W_{\hat{h}}\, \hat{h}_{t-1} + W_{\hat{x}}\, \hat{x}_t + \hat{b}), \quad z_h = W_{\hat{h}h}\, \hat{h}_t + b_{\hat{h}h}, \quad z_x = W_{\hat{h}x}\, \hat{h}_t + b_{\hat{h}x}, \quad z_b = W_{\hat{h}b}\, \hat{h}_t \qquad (3)$$

Figure 1: An overview of HyperRNNs. Black connections and parameters are associated with basic RNNs. Orange connections and parameters are introduced in this work and associated with HyperRNNs. Dotted arrows are for parameter generation.

where $W_{\hat{h}} \in \mathbb{R}^{N_{\hat{h}} \times N_{\hat{h}}}$, $W_{\hat{x}} \in \mathbb{R}^{N_{\hat{h}} \times (N_h + N_x)}$, $\hat{b} \in \mathbb{R}^{N_{\hat{h}}}$, and $W_{\hat{h}h}, W_{\hat{h}x}, W_{\hat{h}b} \in \mathbb{R}^{N_z \times N_{\hat{h}}}$ and $b_{\hat{h}h}, b_{\hat{h}x} \in \mathbb{R}^{N_z}$. This HyperRNN cell has $N_{\hat{h}}$ hidden units. Typically $N_{\hat{h}}$ is much smaller than $N_h$.

As the embeddings $z_h$, $z_x$ and $z_b$ are of dimension $N_z$, which is typically smaller than the hidden state size $N_{\hat{h}}$ of the HyperRNN cell, a linear network is used to project the output of the HyperRNN cell into the embeddings in Equation 3. After the embeddings are computed, they will be used to generate the full weight matrix of the main RNN.

The above is a general formulation of HyperRNN. However, Equation 2 is not practical because the memory usage becomes too large for real problems. We modify the HyperRNN described in Equation 2 so that it can be more memory efficient. We will use an intermediate hidden vector $d(z) \in \mathbb{R}^{N_h}$ to parametrize each weight matrix, where $d(z)$ will be a linear function of $z$. To dynamically modify a weight matrix $W$, we will allow each row of this weight matrix to be scaled linearly by an element in vector $d$. We refer to $d$ as a weight scaling vector. Below is the modification to $W(z)$:

$$W(z) = W(d(z)) = \begin{pmatrix} d_0(z)\, W_0 \\ d_1(z)\, W_1 \\ \vdots \\ d_{N_h}(z)\, W_{N_h} \end{pmatrix} \qquad (4)$$

where $W_i$ denotes the $i$-th row of $W$ and $d_i(z)$ the corresponding element of $d(z)$. While we sacrifice the ability to construct an entire weight matrix from a linear combination of $N_z$ matrices of the same size, we are able to linearly scale the rows of a single matrix with $N_z$ degrees of freedom. We find this change to have a good trade-off, as this formulation of converting $W(z)$ into $W(d(z))$ decreases the amount of memory required by the HyperRNN. Rather than requiring $N_z$ times the memory of a Basic RNN, we will only be using memory on the order of $N_z$ times the number of hidden units, which is an acceptable amount of extra memory usage that is often available in many applications. In addition, the row-level operation in Equation 4 can be shown to be equivalent to an element-wise multiplication operator and hence computationally much more efficient in practice.
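This equivalence is easy to verify numerically (a quick self-contained check, not taken from the paper):

```python
import numpy as np

# Row-scaling W by d and then applying it equals applying W and scaling the
# result element-wise by d: diag(d) W x == d * (W x).
rng = np.random.default_rng(0)
W, d, x = rng.normal(size=(5, 3)), rng.normal(size=5), rng.normal(size=3)
assert np.allclose((d[:, None] * W) @ x, d * (W @ x))
```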
Below is the more memory efficient version of the setup of Equation 2:

$$h_t = \phi\big(d_h(z_h) \odot W_h h_{t-1} + d_x(z_x) \odot W_x x_t + b(z_b)\big), \text{ where } d_h(z_h) = W_{hz} z_h,\; d_x(z_x) = W_{xz} z_x,\; b(z_b) = W_{bz} z_b + b_0 \qquad (5)$$

In our experiments, we focus on the use of hypernetworks with the Long Short-Term Memory (LSTM) architecture (Hochreiter & Schmidhuber, 1997) because LSTM often works better than the Basic RNN. In such a case, an LSTM will have more weight matrices and biases, and thus our main change is to have many more $d$'s, each $d$ being associated with one weight matrix or bias.
"}, {"section_index": "5", "section_name": "3.2 RELATED APPROACHES", "section_text": "The formulation of the HyperRNN in Equation 5 has similarities to Recurrent Batch Normalization (Cooijmans et al., 2016) and Layer Normalization (Ba et al., 2016). The central idea for the normalization techniques is to calculate the first two statistical moments of the inputs to the activation function, and to linearly scale the inputs to have zero mean and unit variance. After the normalization, an additional set of fixed parameters are learned to unscale the inputs if required.

Since the HyperRNN cell can indirectly modify the rows of each weight matrix and also the bias of the main RNN, it is implicitly also performing a linear scaling to the inputs of the activation function. The difference here is that the linear scaling parameters will be learned by the HyperRNN cell, and not based on statistical moments. We note that the existing normalization approaches can work together with the HyperRNN approach, where the HyperRNN cell will be tasked with discovering a better dynamical scaling policy to complement normalization. We also explore this combination in our experiments.

The element-wise operation also has similarities to the Multiplicative RNN and its extensions (mRNN, mLSTM) (Sutskever et al., 2011; Krause et al., 2016) and the Multiplicative Integration RNN (MI-RNN) (Wu et al., 2016). In the case of the mRNN, the hidden-to-hidden weight matrix is replaced with a factorized matrix, to allow the weights to be input dependent. The factorization is described below in Equation 6 (Krause et al., 2016).

$$h_t = \phi\big(W_{hm}\big((W_{mx} x_t) \odot (W_{mh} h_{t-1})\big) + W_x x_t + b\big) \qquad (6)$$

For the MI-RNN approach, a second-order term is added to the Basic RNN formulation, along with scaling vectors for each term, as described in Equation 7. The addition of the scaling vectors allows parameters to be shared more efficiently.

$$h_t = \phi(\alpha \odot W_x x_t \odot W_h h_{t-1} + \beta_1 \odot W_h h_{t-1} + \beta_2 \odot W_x x_t + b) \qquad (7)$$

In the HyperRNN approach, the weights are also input dependent. However, unlike mRNN, both the weight matrices and the bias term depend not only on the inputs, but also on the hidden states. In the MI-RNN approach, the weights are also augmented by both the input and hidden states, via the second-order term in Equation 7. In both the mRNN and MI-RNN approaches, the weight augmentation terms are produced by a linear operation, while in the HyperRNN approach the weight scaling vectors $d$ are dynamically produced by another RNN with its own hidden states and non-linearities.
"}, {"section_index": "6", "section_name": "4 EXPERIMENTS", "section_text": "In the following experiments, we will benchmark the performance of HyperLSTM on language modelling with Penn Treebank, and Hutter Prize Wikipedia. We will also benchmark the method on the tasks of handwriting generation with IAM On-Line Handwriting Database, and machine
translation with WMT'14 en->fr.\nWe first evaluation the HyperLSTM model on a character level prediction task with the Penn Tree bank corpus (Marcus et al.]1993) using the train/validation/test split outlined in (Mikolov et al. 2012). As the dataset is quite small, we apply dropout on both input and output layers with a keep probability of 90%. Unlike previous approaches (Graves2013, Ognawala & Bayer2014) of apply ing weight noise during training, we instead also apply dropout to the recurrent layer (Henaff et al. 2016) with the same dropout probability.\nWe compare our model to the basic LSTM cell, stacked LSTM cells (Graves2013), and LSTM with layer normalization applied. In addition, we also experimented with applying layer normalizatior. to HyperLSTM. Using the setup in (Graves2013), we use networks with 1000 units and trair. the network to predict the next character. In this task, the HyperLSTM cell has 128 units and ar embedding size of 4. As the HyperLSTM cell has more trainable parameters compared to the basic LSTM cell, we also experimented with an LSTM cell with 1250 units..\nFor character-level Penn Treebank, we use mini-batches of size 128, to train on sequences of length 100. We train the model using Adam (Kingma & Bal[2015) with a learning rate of 0.001 and gradient. clipping of 1.0. During evaluation, we generate the entire sequence, and do not use information about previous test errors for prediction, e.g., dynamic evaluation (Graves][2013] Rocki]2016b). As mentioned earlier, we apply dropout to the input and output layers, and also apply recurrent dropout. with a keep probability of 90%. For baseline models, orthogonal initialization (Henaff et al.2016) is used for all weights..\nWe also experiment with a version of the model using a larger embedding size of 16, and also with a lower dropout keep probability of 85%, and report results with this \"Large Embedding\" mode in Table[1 Lastly, we stack two layers of this \"Large Embedding\" model together to measure the benefits of a multi-layer version of HyperLSTM, with a dropout keep probability of 80%.\nModel' TestValidation Param Coun. ME n-gram (Mikolov et al.2012 1.37 Batch Norm LSTM Cooijmans et al.. 2016 1.32 Recurrent Dropout LSTM (Semeniuta et al.. 2016 1.301 1.338 Zoneout RNN (Krueger et al.|2016) 1.27 HM-LSTM' Chung et al. (2016) 1.27 LSTM, 1000 units 2 1.312 1.347 4.25 M LSTM, 1250 units2 1.306 1.340 6.57 M 2-Layer LSTM, 1000 units2 1.281 1.312 12.26 M Layer Norm LSTM, 1000 units2 1.267 1.300 4.26 M HyperLSTM (ours), 1000 units 1.265 1.296 4.91 M Layer Norm HyperLSTM, 1000 units (ours). 1.250 1.281 4.92 M Layer Norm HyperLSTM, 1000 units, Large Embedding (ours) 1.233 1.263 5.06 M 2-Layer Norm HyperLSTM, 1000 units. 1.219 1.245 14.41 M\nTable 1: Bits-per-character on the Penn Treebank test set.\nOur results are presented in Table[1] The key observation here is that 1) HyperLSTM outperform. standard LSTM and 2) HyperLSTM also achieves similar improvements compared to Layer Normal. ization. The combination of Layer Normalization and Hyper LSTM achieves the best test perplexity. so far on this dataset.\nWe train our model on the larger and more challenging Hutter Prize Wikipedia dataset, also knowr. as enwik8 (Hutter2012) consisting of a sequence of 100M characters composed of 205 unique. characters. Unlike Penn Treebank, enwi k8 contains some foreign words (Latin, Arabic, Chinese). 
indented XML, metadata, and internet addresses, making it a more realistic and practical dataset tc test character language models..\nOur setup is similar in the previous experiment, using the same mini-batch size, learning rate, weight initialization, gradient clipping parameters and optimizer. We do not use dropout for the input and output layers, but still apply recurrent dropout with a keep probability of 90%. Similar to (Chung et al.]2015), we train on the first 90M characters of the dataset, use the next 5M as a validation set for early stopping, and the last 5M characters as the test set..\nModell enwik8 Param Count. Stacked LSTM (Graves 2013 1.67 27.0 M MRNN (Sutskever et al. 201 1.60 Grid-LSTM (Kalchbrenner et al. 2016 1.47 16.8 M LSTM (Rocki||2016b 1.45 MI-LSTM (Wu et al.T2 2016 1.44 MLSTM (Krause et al.]2016) 1.42 Recurrent Highway Networks (Zilly et al.]2016 1.42 8.0 M Recurrent Memory Array Structures (Rocki)|2016a 1.40 HM-LSTM3 (Chung et al.2016) 1.40 Surprisal Feedback LSTM* Rocki 2016b 1.37 LSTM, 1800 units, no recurrent dropout?. 1.470 14.81 M LSTM, 2000 units, no recurrent dropout? 1.461 18.06 M Layer Norm LSTM, 1800 units2. 1.402 14.82 M HyperLSTM (ours), 1800 units 1.391 18.71 M Layer Norm HyperLSTM, 1800 units (ours). 1.353 18.78 M Layer Norm HyperLSTM, 2048 units (ours). 1.340 26.54 M\nTable 2: Bits-per-character on the enwik8 test set\nThe results are summarized in Table[2] As can be seen from the table, HyperLSTM is once agaii competitive to Layer Norm LSTM, and if we combine both techniques, the Layer Norm HyperL STM achieves respectable results. The large version of HyperLSTM with normalization that use 2048 hidden units achieve near state-of-the-art performance for this task. In addition, HyperLSTM converges more quickly compared to LSTM and Layer Norm LSTM (see Figure2).\nWe perform additional analysis to understand the behavior of HyperLSTM by visualizing how the weight scaling vectors of the main LSTM change during the character sampling process. In Figure|3 we examine a sample text passage generated by HyperLSTM after training on enwi k8 along with the weight differences below the text. We see that in regions of low intensity, where the weight of the main LSTM are relatively static, the types of phrases generated seem more deterministic For example, the weights do not change much during the words Europeans, pos ses sions and reservation. The regions of high intensity is when the HyperLSTM cell is making relatively large changes to the weights of the main LSTM.\nI We do not compare against methods that use dynamic evaluation. 2Our implementation. 3Based on results of version 2 at the time of writing.http: / /arxiv. org/abs/1609. 01704v2 4This method uses information about test errors during inference for predicting the next characters, henc t is not directly comparable to other methods that do not use this information..\nAs enwi k 8 is a bigger dataset compared to Penn Treebank, we will use 1800 units for our networks. We also perform training on sequences of length 250. Our normal HyperLSTM cell consists of 256. units, and we use an embedding size of 64. To improve results, we also experiment with a larger model where both HyperLSTM and main network both have 2048 hidden units. The HyperLSTM. cell consists of 512 units with an embedding size of 64. We also apply recurrent dropout to this. 
larger model, with dropout keep probability of 85%, and train on a longer sequence length of 300..\n2.25 -800 -LSTM 2.15 LSTM -850 2 Layer LSTM 2.05 Layer Norm LSTM sso7 -900 Layer Norm LSTM 1.95 HyperLSTM -950 - HyperLSTM 1.85 607 1.75 - Layer Norm HyperLSTM -1000 1.65 -1050 1.55 -1100 1.45 -1150 1.35 1.25 -1200- 0 10 20 30 40 50 60 70 80 2.5 22.5 42.5 62.5 82.5 102.5 Training Step (x1000) Training Step (x1000)\nFigure 2: Loss graph for enwi k8 (left). Loss graph for Handwriting Generation (right\nIn 1955-37 most American An [Japan (Korea [Japan]], the Mayotte I1ke Constantino ple (in its [[880]] that as the inued sequel toget her orde Gal churches and [ [Me1 ito de la Vegeta Provine|Felix]] had broken Diocletian the full victory of Augustus, cited by Stephen I.. Alexander Se nate became Princess Cartara, an annual of war 777-184 and founded numerous of justice practitioners.\nFigure 3: Example text generated from HyperLSTM model. We visualize how four of the main RNN's weight matrices (Wj, W, Wf, W) effectively change over time by plotting the norm of. the changes below each generated character. High intensity represent large changes being made to. weights of main RNN."}, {"section_index": "7", "section_name": "4.3 HANDWRITING SEOUENCE GENERATION", "section_text": "We will use the same model architecture described in (Graves 2013) and use a Mixture Densit Network layer (Bishop1994) to generate a mixture of bi-variate Gaussian distributions to model a. each time step to model the pen location. We normalize the data and use the same train/validatio. split as per (Graves2013) in this experiment. We remove samples less than length 300 as we foun. these samples contain a lot of recording errors and noise. After the pre-processing, as the dataset i. small, we introduce data augmentation of chosen uniformly from +/- 10% and apply a this randon. scaling a the samples used for training..\nFor model training, will apply recurrent dropout and also dropout to the output layer with a keep probability of 0.95. The model is trained on mini-batches of size 32 containing sequences of variable. length. We trained the model using Adam (Kingma & Ba]2015) with a learning rate of 0.0001 and. gradient clipping of 5.0. Our HyperLSTM cell consists of 128 units and a signal size of 4. For. baseline models, orthogonal initialization (Henaff et al.]2016) is performed for all weights..\nIn addition to modelling discrete sequential data, we want to see how the model performs when modelling sequences of real valued data. We will train our model on the IAM online handwrit- ing database (Liwicki & Bunke 2005) and have our model predict pen strokes as per Section 4.2 of (Graves2013). The dataset has contains 12179 handwritten lines from 221 writers, digitally recorded from a tablet. We will model the (x, y) coordinate of the pen location at each recorded time step, along with a binary indicator of pen-up/pen-down. The average sequence length is around 700 steps and the longest around 1900 steps, making the training task particularly challenging as the network needs to retain information about both the stroke history and also the handwriting style in order to predict plausible future handwriting strokes.\nModel Log-Loss Param Cour LSTM, 900 units (Graves 2013 -1,026 3-Layer LSTM, 400 units s(Graves2013) -1,041 3-Layer LSTM, 400 units, adaptive weight noise (Graves 2013 -1,058 LSTM, 900 units, no dropout, no data augmentation.' -1,026 3.36 M 3-Layer LSTM, 400 units, no dropout, no data augmentation. 
-1,039 3.26 M LSTM, 900 units2 -1,055 3.36 M LSTM, 1000 units2 -1,048 4.14 M 3-Layer LSTM, 400 units2 -1,068 3.26 M 2-Layer LSTM, 650 units? -1,135 5.16 M Layer Norm LSTM, 900 units2 -1,096 3.37 M Layer Norm LSTM, 1000 units2 -1,106 4.14 M Layer Norm HyperLSTM, 900 units (ours) -1,067 3.95 M HyperLSTM (ours), 900 units -1,162 3.94 M\nTable 3: Log-Loss of IAM Online DB validation set\nThe results are summarized in Table|3] Our main result is that HyperLSTM with 900 units, without Layer Norm, achieves best log loss on the validation set across all models in published work and in our experiment. HyperLSTM also converges more quickly compared to other models (see Figure[2)\nSimilar to the earlier character generation experiment, we show a generated handwriting sample from the HyperLSTM model in Figure[4] along with a plot of how the weight scaling vectors of the main RNN is changing over time below the sample. For a more detailed interactive demonstration of handwriting generation using HyperLSTM, visit http://b1og.otoro.net/2016/09/287\nFigure 4: Handwriting sample generated from HyperLSTM model. We visualize how four of the main RNN's weight matrices (Wh, W, Wf, W) effectively change over time, by plotting norm. of changes made to them over time..\nWe observe that the regions of high intensity is concentrated at many discrete instances, rather than slowly varying over time. This implies that the weights experience regime changes rather than gradual slow adjustments. We can see that many of these weight changes occur at the boundaries between words, and between characters. While the LSTM model alone already does a decent job of generating time-varying parameters of a Mixture Gaussian distribution used to generate realistic handwriting samples, the ability to go one level deeper, and to dynamically generate the generative model is one of the key advantages of HyperLSTM over a normal LSTM."}, {"section_index": "8", "section_name": "4.4 NEURAL MACHINE TRANSLATION", "section_text": "Finally, we experiment with the Neural Machine Translation task using the same experimental setup outlined in (Wu et al.||2016). Our model is the same wordpiece model architecture with a vocabulary size of 32k, but we replace the LSTM cells with HyperLSTM cells. We benchmark both models on WMT'14 En->Fr using the same test/validation set split described in the GNMT paper (Wu et al. 2016). The GNMT network has 8 layers in the encoder, 8 layers in the decoder. The first layer of the encoder has bidirectional connections. The attention module is a neural network with 1 hidden layer. When a LSTM cell is used, the number of hidden units in each layer is 1024. The model is trained in a distributed setting with a parameter sever and 12 workers. Additionally, each worker uses 8 GPUs and a minibatch of 128.\nOur experimental setup is similar to that in the GNMT paper (Wu et al.|2016), with two simpli fications. First, we use only Adam without SGD at the end. Adam was used with the same same hyperparameters described in the GNMT paper: learning rate of O.0002 for 1M training steps.\nWe apply the HyperLSTM cell with Layer Norm to the GNMT architecture that uses a vocabular of 32K wordpieces. 
We keep the same number of hidden units, which means that our model wil have 16% more parameters.\nModel Test BLEU Log Perplexity Param Coun Deep-Att + PosUnk (Zhou et al.]2016 39.2 GNMT WPM-32K, LSTM 7Wu et al.f2 2016) 38.95 1.027 280.7 M GNMT WPM-32K, ensemble of 8 LSTMs(Wu et al.J|2016) 40.35 2,246.1 M GNMT WPM-32K, HyperLSTM (ours) 40.03 0.993 325.5 M\nTable 4: Single model results on WMT En->Fr (newstest2014\nThe results are reported in Table[4] which shows that the HyperLSTM cell improves the performance. of the existing GNMT model, achieving state-of-the-art single model results for this dataset. In ad dition, we demonstrate the applicability of hypernetworks to large-scale models used in production. Systems."}, {"section_index": "9", "section_name": "5 CONCLUSION", "section_text": "In this paper, we presented a method to use one network to generate weights for another neural. network. Our hypernetworks are trained end-to-end with backpropagation and therefore are efficient. and scalable. We focused on applying hypernetworks to generate weights for recurrent networks.. On language modelling and handwriting generation, hypernetworks are competitive to or sometimes. better than state-of-the-art models. On machine translation, hypernetworks achieve a significant gain on top of a state-of-the-art production-level model.."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "Jimmy L. Ba, Jamie R. Kiros, and Geoffrey E. Hinton. Layer normalization. NIPs, 2016\nLuca Bertinetto, Joao F. Henriques, Jack Valmadre, Philip H. S. Torr, and Andrea Vedaldi. Learning feed-forward one-shot learners. In NIPS, 2016..\nChristopher M. Bishop. Mixture density networks. Technical report, 1994.\nTim Cooijmans, Nicolas Ballas, Cesar Laurent, and Caglar Gulcehre. Recurrent Batch Normaliza tion. arXiv:1603.09025, 2016.\nTest BLEU Log Perplexity Param Count\nunyoung Chung, Sungjin Ahn, and Yoshua Bengio. Hierarchical multiscale recurrent neural net works. arXiv preprint arXiv:1609.01704, 2016.\nBert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. In NIPS, 2016.\nMark W Goudreau. C Lee Giles. Srimat T Chakradhar. and D Chen. First-order versus second-orde single-layer recurrent neural networks. IEEE Transactions on Neural Networks. 1994\nAlex Graves. Generating ences with recurrent neural networks. arXiv:1308.0850, 2013\nSepp Hochreiter and Juergen Schmidhuber. Long short-term memory. Neural Computation, 1997\nMarcus Hutter. The human knowledge compression contest. 2012. URL http://prize hutter1.net/\nMax Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, anc Koray Kavukcuoglu. Decoupled Neural Interfaces using Synthetic Gradients. arXiv preprini arXiv:1608.05343, 2016\nNal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. In ICLR, 2016\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015\nJan Koutnik, Faustino Gomez, and Jurgen Schmidhuber. Evolving neural networks in compressed weight space. In GECCO, 2010.\nMarcus Liwicki and Horst Bunke. IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard. In ICDAR, 2005\nTomas Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocky Subword language modeling with neural networks. preprint, 2012.\nSaahil Ognawala and Justin Bayer. Regularizing recurrent networks-on injected noise and norm based methods. arXiv preprint arXiv:1410.5684. 2014\nKamil Rocki. 
Recurrent memory array structures. arXiv preprint arXiv:1607.03085, 2016a\nJurgen Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131-139. 1992\nJurgen Schmidhuber. A 'self-referential' weight matrix. In ICANN. 1993\nMitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313-330, 1993.\nKenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. A hypercube-based encoding fo. evolving large-scale neural networks. Artificial Life, 15(2):185-212, 2009.\nIlya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural net works. In ICML, 2011.\nY. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao,. K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google's Neural Machine Translation System:. Bridging the Gap between Human and Machine Translation. ArXiv e-prints, 2016..\nYuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multi plicative integration with recurrent neural networks. NIPS, 2016..\nJulian Zilly, Rupesh Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent highway networks arXiv preprint arXiv:1607.03474, 2016."}, {"section_index": "11", "section_name": "A APPENDIX", "section_text": "he eastern half of Russia varies from Modern to Central Europe. Due to similar lighting and the extent of the combination of long tributaries to the [[Gulf of Boston]l, it is more of a private warehouse than the [[Austro-Hungarian Orthodox Christian and Soviet Union] ].\n-=Demographic data base=-\n[[Image:Auschwitz controversial map.png|frame|The ''Austrian Spelling''] [[Image:Czech Middle East SsR chief state 103.JPG|thumb|Serbian Russia movement]] [[1593]]&ndash;[[1719]], and set up a law of [[ parliamentary sovereigntyll and unity in Eastern churches. In medieval Roman Catholicism Tuba and Spanish controlled it until the reign of Burgundian kings and resulted in many changes in multiculturalism, though the [[Crusades]l, usually started following the [[Treaty of Portugal]l, shored the title of three major powers, only a strong part. [[French Marines]] (prompting a huge change in [[President of the Counci of the Empire]], only after about [[1793]l, the Protestant church, fled to the perspective of his heroic declaration of government and, in the next fifty years, [[Christianity|Christian]] and [[Jutland]]. Books combined into a well-published work by a single R. (Sch. M. ellipse poem) tradition in St Peter also included 7:l, he dwell upon the apostle, scripture and the latter of Luke; totally unknown, a distinct class of religious congregations that describes in number o [[remor]]an traditions such as the [[Germanic tribes]] (Fridericus or Lichteusen and the Wales). Be introduced back to the [[14th centuryll, as related in the [[New Testament]] and in its elegant [[ Anglo-Saxon Chroniclell, although they branch off the characteristic traditions which Saint [[Philip of Macedon]l asserted. Ae also in his native countries. In [[1692]l, Seymour was barged at poverty of young English children, which cost almost the preparation of the marriage to him. 
Burke's work was a good step for his writing, which was stopped by clerg in the Pacific, where he had both refused and received a position o successor to the throne. Like the other councillors in his will, th elder Reinhold was not in the Duke, and he was virtually non-father\nAe also in his native countries.\nI n 1692 Seymour was barged at poverty of young English children,. which cost almost the preparation of the marriage to him.. Burke's work was a good step for his writing, which was stopped by clergy. in the Pacific, where he had both refused and received a position of. successor to the throne. Like the other councillors in his will, the elder Reinhold was not in the Duke, and he was virtually non-father. of Edward I, in order to recognize [[Henry II of England|Queen Enrie. 1l of Parliament. The Melchizedek Minister Qut]] signed the [[Soviet Union]], and forced. Hoover to provide [[Hoover (disambiguation) |hoover]]s in [[1844]],. [ [1841] ] . His work on social linguistic relations is divided to the several times of polity for educatinnisley is 760 Li Italians. After Zaiti's death. , and he was captured August 3, he witnessed a choice better by. public, character, repetitious, punt, and future..\nFigure 5: enwi k8 sample generated from 2048-unit Layer Norm HyperLSTM\nexual intimacy was traditionally performed by a male race of the [[ mitochondriall of living things. The next geneme is used by ''. Clitoron'' into short forms of [[sexual reproduction]]. When a. maternal suffeach-Lashe]] to the myriad of a "master's characte ". He recognizes the associated reflection of [[force call|. carriers]l, the [[Battle of Pois except fragile house and by. historians who have at first incorporated his father..\nFigure 6: enwik 8 sample generated from 2048-unit Layer Norm HyperLSTM\nA.2 EXAMPLES OF RANDOMLY CHOSEN GENERATED HANDWRITING SAMPLES\nWeoh onmer! Vi Ihe M ocity z zl\nWelFWWos Ceacl shlusll ^yiu\nl6vnJ Te hem|n padp L5m|+YR oo\\onru a4i5\nev ruC+rLicH/0u kco0y nOdd _t 7Or cr5 D0e^ S\nzluc L Wi 7C\nOf steclm0V_Co1hi5)HroCr tnthih;ep9cd7n\nCOfriep - Brnm H\\RnccFtoreq0r hA 6 wf\nNI dcafede boeerrers. 1S-\nFigure 7: Handwriting samples generated from LSTM\nn Mevh 0nmens Vi1he if+ tocr+y z zluyaiond'Qucod WWos MLQAylo CeaCl fhlusll ^yiu WelL nOdd_i Roras hoeno^ Perrucauicd!ov rcooy ycalnes puy Nn2jmy AmiynnloerebloCro H 2lUc Lg wit0 Df s+ecLmoV_Co+hi5)hs9OK tnthh{ep%odrnlFr! losaRSow foso qwote we Cowf K edanls COrriep- Bronm+knccF+orern1 fey hutw loul icsy wkl dcafede bQeerrcens.i Dno U|ee 7epfnparas`nTuc s.UI.1NrM20y4Q&nnq nD.lu 1n a3,J Figure 7: Handwriting samples generated from LSTM\nU|ee Jep fOnearag'n Tue s.UI 1 NrM=0yaene nD.lu1n a7, J\nNelq pnunc r0r b4 mir we] jelin fou7he3\nD\nAs clisF Wyerc iue.\nhvq narg[su0] sevIh 5peprocpo segGriu dech\nw4cv cs; C7\nnQcouoyseoe\ne dd 1 wcl QG\nFigure 8: Handwriting samples generated from Layer Norm LSTM\nwF Helwfn hurSyc Bnu)a anceshucWnwy Vre fras b= nir+C ry Mg we1;edn rourms J ncl4V/haunc ouU heug hisonRer rxaunk FesZlwmce Pr+-ce As alist W5erc iueLs[csan!at< J wr d`tc fayC Cs QFFo{xZuirR yBJde he M anHhe tix I foledy 1 wcf Qig W1'nedetew'qa q\\o\nUc n hur<y C BmD8 anceHe\n`cCo1 Qu fcc Hevq W2iVojs w IzLy(f Li'eneuq7u oos6\n1he h qe cUed on+hnc +huy rqenxf -e 4f5f OR epo/\nDow\ncVic+5ivic|en+itapqt p<ongnis yngt Cac hos Cculed, 0d lop f ovebd the ithe weoqaxts Shan pJroMin to Tho G n0m KChey felce5yet medPad tha j1ybeuce!edah SoUS ch f31e ffF;jg Ounber nMne inlsaived moere k cor.Kosroert^e FvecaneK3. 
neuitn grmh< Fino lcWe bulr( Aorgeoedtiythq\nrm wAemue, oe1neetica AdCemg unis bn v0wS\nfuye thedn c\\rehfetcce the lern hUe So n^ce Ln\nMa#SUsiOreFisle w aly cahktm cu1 f0Hbh Oh\n{cetd atr Lh gacf re s sQiso 9ul\nVic+Svie|en+Nta pqt pe0uguis yngt CaR hos CCued, 0d\nAe ifGo weoqq\nShoxn psrn iu to Tho G nom kche FeLCesx yqt meJPad. fha\nCtos roerthe Fvecahq k3) Ved. meerre&\ncWe bulrl Firs neui ton V YaLo.\nFigure 9: Handwriting samples generated from HyperLSTM\nVAbe hos FO Gn\npoce!qdg h SoVS 9 r N\nWe randomly selected translation samples generated from both LSTM baseline and HyperLSTM models from the WMT'14 En->Fr Test Set. Given an English phrase, we can compare between th correct French translation, the LSTM translation, and the HyperLSTM translation\nEnglish Input\nFrench (Ground Truth)\nsuivant , Anne Whyte a dit : < Si quelqu' un doit ne pas enfreindre 1a 1oi C' est un avocat criminel\nFrench (Ground Truth\nHyperLSTM Translation\nSelon elle , le ScRs a ete invite a une mediation et elle a demand une periode de reflexion supplementaire .."}, {"section_index": "12", "section_name": "HyperLSTM Translation", "section_text": "Les relations entre les Etats-Unis et 1' Allemagne ont ete mises a rude epreuve apres que la NsA a attaque le telephone de la chanceliere Angela Angela ."}, {"section_index": "13", "section_name": "HyperLSTM Translation", "section_text": "J' etais au milieu de la soiree ce soir-la et a la television 1 1endemain"}] |
H1Go7Koex | [{"section_index": "0", "section_name": "CHARACTER-AWARE ATTENTION RESIDUAL NET WORK FOR SENTENCE REPRESENTATION", "section_text": "Xin Zheng\nNanyang Technological University, Singapore SAP Innovation Center, Singapore.\nText classification in general is a well studied area. However, classifying short. and noisy text remains challenging. Feature sparsity is a major issue. The quality. of document representation here has a great impact on the classification accuracy.. Existing methods represent text using bag-of-word model, with TFIDF or other. weighting schemes. Recently word embedding and even document embedding. are proposed to represent text. The purpose is to capture features at both word. level and sentence level. However, the character level information are usually ig. nored. In this paper, we take word morphology and word semantic meaning into. consideration, which are represented by character-aware embedding and word dis-. tributed embedding. By concatenating both character-level and word distributed embedding together and arranging words in order, a sentence representation ma-. trix could be obtained. To overcome data sparsity problem of short text, sentence. representation vector is then derived based on different views from sentence repre-. sentation matrix. The various views contributes to the construction of an enriched sentence embedding. We employ a residual network on the sentence embedding. to get a consistent and refined sentence representation. Evaluated on a few short. text datasets, our model outperforms state-of-the-art models."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "For text classification, a popular feature representation method is bag-of-word. However, this rep. resentation has an intrinsic disadvantage that two separate features will be generated for two words with the same root or of different tenses. Lemmatization and stemming could be applied to partially address this problem, but may not always leads to correct results. For example, \"meaningful'' and. \"meaningless\" would both be considered as \"meaning\" after applying lemmatization or stemming. algorithms, while they are of opposite meanings. Thus, word morphology could also provide useful information in document understanding, particular in short text where the information redundancy. is low.\n*The two authors contribute the same for the work\nZhenzhou Wu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "For short text, an important issue is data sparsity, particularly when utilizing feature representation method like bag-of-word, regardless the weighting scheme. Therefore, various distributed word representation like Word2Vec (Mikolov et al.]2013) and document representation Doc2Vec (Le & Mikolov 2014) have been proposed to address the problem. However, this kind of method miss the word morphology information and word combination information. To deal with these issues, we propose a model which could capture various kinds of features that could benefit classification task.\nIn this paper, we look deep into characters. We learn character representation and combine both character-level (Zhang et al.|[2015) and word-level embedding to represent a word. Thus both mor- phology and semantic properties of the word are captured. As we know, not all the words in a sen- tence contribute the same when predicting the sentence's label. Therefore, highlight the relatively pertinent information would give better chance of correct prediction. 
Attention mechanism (Mnih et al.2014]Bahdanau et al. 2016) which focuses on specific part of input could help achieve this gOal. The applications of attention mechanism are mostly on sequential model, while we employ\nthe idea of attention on a feed-forward network (Raffel & Ellis]2015). By multiplying the weigh assigned by attention mechanism to its corresponding word vector, a weighted feature matrix coul be constructed by concatenating the sequence of word embeddings in a sentence.\nShort text usually could not provide much useful information for class prediction. We try differe views to extract as much information as possible to construct an enriched sentence representatic vector. Specifically, to convert a sentence representation matrix to an enriched vector, we draw tv ypes of features. The first one is based on word feature space and the other one is based on n-grar However, not all the features contribute the same on sentence classification. Attention mechanism applied to focus on the significant features. Since these features come from different views, we nee a method to make the elements consistent. The residual network proposed in (He et al.]2015) 201 achieve much better results on image classification task. In other words, the residual mechanis could construct better image representation. Therefore, we adopt residual network to refine tl sentence representation vector. Once we obtain a good quality representation for the sentence, will be delivered to a classifier."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Deep convolutional neural network suggests benefits in image classification (Krizhevsky et al.2012 Sermanet et al.|2013). Therefore, many research also try to apply it on text classification problem. Kim(2014) propose a model similar to Collobert et al.(2011) architecture. However, they employ two channels of word vectors. One is static throughout training and the other is fine-tuned via back propagation. Various size of filters are conducted on both channel, and the results are concatenatec. together. Then max-pooling over time is taken to select the most significant feature among eacl filter. The selected features are concatenated as the sentence vector..\nSimilarly,Zhang et al.(2015) also employ the convolutional networks but add character-level infor-. mation for text classification. They design two networks, one large and one small. Both of them have nine layers including six convolutional layer and three fully-connected layers. Between the three fully connected layers they insert two dropout for regularization. For both convolution and max-pooling layers, they employ 1-D version (Boureau et al.]2010). After convolution, they add. the sum over all the results from one filter as the output. Specially, they claim 1-D max-pooling. enable them to train a relatively deep network (Boureau et al.)..\nBesides applying models directly on testing datasets, more aspects are considered when extracting features. Character-level feature is adopted in many tasks besides Zhang et al.(2015) and most o them achieve quite good performance.dos Santos & Zadrozny(2014) take word morphology an shape into consideration which have been ignored for part-of-speech tagging task. They suggest the intra-word information is extremely useful when dealing with morphologically rich languages. 
The adopt neural network model to learn the character-level representation which is further delivered t help word embedding learning.Kim et al.(2016) construct neural language model by analysi of word representation obtained from character composition. Results suggest the model could be encode semantic and orthographic information from character level.\nAttention model is also utilized in our model, which is used to assign weights for each parts oj. components. Usually, attention model is used in sequential model (Rocktaschel et al.|2015) Mnih. et al.]2014] Bahdanau et al.]2016] Kadlec et al.2016). The attention mechanism includes sensor, internal state, actions, and reward. At each time, the sensor will capture a glimpse network which. only focus on a small part of the network. Internal state will summarize the extracted information Actions decides the location for the next step and reward suggests the benefit when taking the action. In our condition, we adopt a simplified attention network as (Raffel & Ellis]2015) 2016). We do no1. need to guess the next step location and just give a weight on each components which indicates the. significance of the element.\nResidual network (He et al.]2015] 2016] Chen et al.]2016) is known to be able to make neural network deeper and relieve degradation problem at the same time. And residual network in (He et al. 2015) outperforms the state-of-the-art models on image recognition. He et al. (2016) introduces\nThere are many traditional machine learning methods for text classification and most of them could achieve quite good results on formal text datasets. Recently, many deep learning methods are pro- posed to solve the text classification task (Zhang et al. 2015f[dos Santos & Gatti]2014]Kim2014)\nResidual Final sentence Sentence length Sentence network vector v' Vector v iteration qy F(v) + V W 0 qc R Concatenate D Max Sentence vector Convolution pooling constructor over filters\nFigure 1: Illustration of the proposed model. qc is the character-level embedding vector for the word. and qw is the word embedding generated according to (Mikolov et al.]2013). Column of Q is the. concatenation of qc and qw. The row length is sentence length. The grey box for sentence vector constructor is illustrated in Figure|2|and the grey box for Residual network iteration is illustrated in. Figure 3(b)\nhow to make the residual block more efficient on image classification. Similarly, for short text classification problem, the quality of sentence representation is also quite important for the final result. Thus, we try to adopt the residual block as in (He et al.[2015f 2016) to refine the sentence. Vector."}, {"section_index": "4", "section_name": "CHARACTER-AWARE ATTENTION RESIDUAL NETWORK", "section_text": "In this paper, we propose a character-aware attention residual network to generate sentence represen-. tation. Figure[1illustrates the model. For each word, the word representation vector is constructed. by concatenating both character-level embedding and word semantic embedding. Thus a sentence. is represented by a matrix. Then two types of features are extracted from the sentence matrix to construct the enriched sentence representation vector for short text. However, not all the features contribute the same for classification. Attention mechanism is employed to target on pertinent parts To make features extracted from different views consistent, a residual network is adopt to refine the sentence representation vector. 
Thus, an enriched sentence vector is obtained to do text classifica tion."}, {"section_index": "5", "section_name": "3.1 WORD REPRESENTATION CONSTRUCTION", "section_text": "where v, is a binary column vector with 1 only at the c,-th place and O for other positions. Here, we fix the word length dc and take zero-padding when necessary..\nFor each of such matrix Ew, a convolution operation (Le Cun et al.]1990) with m filters (i.e.. kernels) P E Rdek is applied on Ew, and a set of feature maps could be obtained. Instead of. adopting max-pooling over time (Collobert et al.|2011), we adopt max-pooling over filters operation to capture local information of words as shown in Figure[1] Similar operation is adopted in (Shen. et al.[2014). That is we get the max feature value over results of m filters at the same window position, which depicts the most significant feature over the k characters. Thus, a vector qc for the. word which captures the character-level information is constructed..\nLet C be the vocabulary of characters, and E E Rdc[c| is the character embedding matrix, where. dc is the dimensionality of character embedding. Given a word, which is composed of a sequence of characters [c1, C2, ..., Cne], its corresponding character-level embedding matrix would be Ew E Rdcxnc. Herein,\nEW=E.Vi\nTri-gram Convolution Bi-gram Unigram Input: Q S1 S2 S3 S4 S5 Type 1 Type 2 X X Max Max Max Sv1 Sv2 Sv3 r2 r3 ro Output: Sentence Vector v\nFigure 2: Illustration of sentence representation. The input Q is from Figure [1] The weights S1, S2, S3, S4 and Sv1, Sv2, Sv3 are generated from attention mechanism, which is illustrated in Fig. ure|3(a)] Type 1 feature and type 2 feature are detailed in|3.2\nSoftmax() Final sente g(hp) Fully connected layer F(v) vector v' h1 S1 h2 S2 F(v) + V h3 ReLU V S3 : Sn hn Identity map\n(a) Attention mechanism. This is the basic at- (b) Residual block for refining sentence represen tention mechanism we used to assign weights tation v S1, S2, S3, S4 and Sv1, Sv2, Sv3. The hi is the cor- responding input vector.\nNote that embedding vector qc could only capture the word morphological features, while it can not reflect word semantic and syntactic characteristics. Therefore, we concatenate the distributed word representative vector qw (i.e., Word2Vec) (Mikolov et al.]2013) to qc as the word's final representation q E R(de +dw), where dw is the dimensionality of Word2Vec. Given a sentence, which consists of a sequence of words [w1, w2, .., wnw], its representation matrix is Q E R(de+dw)nw.\nTo overcome the lack of information issue for short text, we explore various kinds of useful in. formation from limited context. From higher level, we adopt two types of features as shown in\nFigure[2|(i.e., type 1 feature and type 2 feature). They capture different views of information for the. short text, which could be considered as results from horizontal view and vertical view on sentence representation matrix Q separately.\nType 1 feature takes word's feature space (i.e., horizontal view on Q) into consideration. Th. feature space is the composition of both character-level embedding and word semantic embedding Each word is a point in the feature space. We formulate the summation over all words appearing. in the sentence as the sentence's representation, inspired by (Zhang et al.| 2015). In fact, not al. the words in a sentence contribute the same for prediction. Therefore, we want to highlight th. significant words and this is realized by weighting the word's representation features. 
To assig. the weights, we employ attention mechanism, and multiply the weight to the word feature vecto. as Equation[2] Specifically, we follow Raffel & Ellis[(2015) and Bahdanau et al.(2014) as show in Figure 3(a)] For each word representation vector qi, we apply a Tanh function on the linea. transformation of qi as g(qi) = Tanh(Wqhqi + bqh), where Wth E R1(de+dw), bth E R. Then a softmax function on g(qi) is used to assign a weight s; for each qi, which indicates the significanc. of word i in the sentence.\nexp(g(qi)) Si qi = Sqi exp(g(qj)) nw\nType 2 feature models the word level features (i.e., vertical view on Q). As we know, sometime continuous words combination is meaningful and pertinent for sentence classification. To captur n-gram information, we apply convolution operation on Q, which is followed by a max-pooling ove time. We adopt several different kernel sizes to model various n-grams. Different n-grams contribute differently. The attention mechanism is utilized again on the vectors of n-gram representations, anc the resulting weights indicate their significance. We get the weighted feature vectors r1,r2,r3 Concatenating ro, r1, r2, r3, the complete sentence vector v is constructed."}, {"section_index": "6", "section_name": "3.3 RESIDUAL NETWORK FOR REFINING SENTENCE REPRESENTATION", "section_text": "The residual learning (He et al.]2015] 2016) is reported to outperform state-of-the-art models i1 image classification task and object detection task. This suggests residual learning could help t capture and refine the embedding. To make the features of sentence vector v from different view consistent, we employ residual learning to v.\nLet the desired mapping as H(v), instead of making each layer directly optimize H(v), residua learning (He et al.| 2015) turns to fit the residual function:\nF(v):=H(v) - v.\nThus, the original target mapping becomes\ny=F(v)+v.\nBoth residual function F(v) and the added input form v are flexible. In our model, we construct the building block by two fully connected layers connected by a ReLU (Nair & Hinton,2010) operatior as shown in Figure 3(b)] Meanwhile, the identity mapping is adopted by performing a shortcut connection and element-wise addition:.\nwhere v/ is the refined sentence vector, G is the weight matrix to be learned\nAfter getting the sentence embedding v/ from the building block, it is further delivered to a softmax classifier for text classification.\nAs a result, we can get a weighted sentence representation matrix Q E R(dc+dw)nw. Then we. employ an average over words in the sentence at the same feature position and obtain a sentence representation vector ro.\nv1=F(v,G)+v\nDataset Classes Train Samples Test Samples Average length of text Tweet 5 28,000 7,500 7 Question 5 2,000 700 25 AG_news 5 120,000 7,600 20"}, {"section_index": "7", "section_name": "4.1 DATASETS", "section_text": "We adopt testing datasets from different sources. There are three datasets, including Tweets, Ques tion, AG_news. All of them are relatively short.\nTweets are typical short text with only 140 characters limitation. We crawl the tweets from Twitter. with a set of keywords, which is specifically about some products. We label them as positive. negative, neutral, question and spam..\nQuestion dataset is a small dataset. The content is short questions, and the labels are questior types.\nAG_News dataset is from (Zhang et al.]2015). The reason we choose this is because the length of. text is much shorter than others. 
The news here only contains the title and description fields.\nIn this paper, we take 128 ASCII characters as character set, by which most of the testing documents are composite. We define word length n. as 20 and character embedding length d. as 100. If word with characters less than 20, zero padding is taken. If the length is larger than 20, just take the first 20 characters. We train the word distributed embedding using training data and the feature dimension is 300. We take sentence length as 20, which is enough to cover most of crucial words We add 5 residual blocks to refine the sentence vector.\nTable 2: Kernel size for convolutional layers\nWe select both traditional models and deep learning models on classification as baselines\nTF-SVM is the bag-of-word feature weighted by counting the term frequency in a sentence. Then deliver the feature matrix to a SVM classifier.\nTFIDF-SVM is taken as traditional baseline model. Since SVM classifier is robust and state-of. the-art traditional classifier, and TFIDF usually assign good weights for bag-of-words in documents even for tough inputs. So this is a competitive baseline model.\nTable 1: Statistics of datasets\nConvolutional layer Kernel size Conv:character embedding. (dc, 4) Conv:ngram dc+dw,1),(dc+dw,2),(dc+dw,3)\n(dc, 4) dc+ dw,1),(dc+ dw,2),(dc+ dw,3\nLg. Conv, Sm. Conv are proposed in (Zhang et al.]2015) which also consider character-leve embedding, and they concatenate all the characters' embeddings in a sentence in order as sentence's representation matrix. For fair comparison, we do not include thesaurus to help clear documents"}, {"section_index": "8", "section_name": "4.4 COMPARISON RESULTS", "section_text": "Table 3|shows the comparison results on the testing datasets. As we can see, the proposed model. could outperform baseline models on Tweets and Question datasets. For AG_news dataset, our. method could give comparable results as the best baseline model, TFIDF-SVM. The TFIDF-SVM model can achieve relatively better results than others. However, both Lg. Conv and Sm. Conv. do not perform well on Tweets dataset. This may because these two models are relatively deep. network with several down sampling operations (i.e., max-pooling) and this dramatically decreases. the short text representation. And short text does not contain much information. Thus Lg. Conv and. Sm. Conv could not give good results. The TF-SVM model also does not perform well on Tweets. dataset. This may because the tweet text is too short and term frequencies are mostly 1 which is not. enough to provide information on classification. Similar to result of CAR-1 on Tweets data. When. removing type 1 feature, the performance drops dramatically. However, for other datasets, in which. the document length is longer and the content is relatively formal, removing type 1 feature does not. influence the performance that much. Hence, these results suggest the word character-level feature and semantic feature (i.e., type 1 feature) are rather important for short, free-style text. On the other. hand, by adding type 2 features can also improve the performance according to results of CAR-. 2. Consequently, when dealing with short text, either formal or informal, including character-level. feature, word-semantic feature and n-gram feature would benefit the performance..\nAnother comparison is adding the residual network or not. As we can see from Table[3l residual net work could refine the vector representation. 
When removing residual block, performances on three datasets all decrease. In particular, the improvement for shorter and noisy text (Tweets dataset) is more than those relatively longer and formatted documents. Thus, for short noisy text classification problem, one adopts residual building block would improve the performance."}, {"section_index": "9", "section_name": "5 CONCLUSION", "section_text": "We propose a character-aware attention residual network for short text classification. We construc the sentence representation vector by two kinds of features. The first is focusing on feature space which include both character-level characteristics and semantic characteristics. The other is n-gran features. To make them consistent, the residual network helps refine the vector representation. Ex periment results suggest both extracted features and the residual network helps on short text clas\nTable 3: Comparison results on accuracy. \"CAR is the proposed model which indicates Character aware Attention Residual network. \"WAR\"' is the proposed model with only word embedding from Word2Vec. \"CA\" is \"CAR\" removing residual network. \"CAR-1\" is \"CAR\" removing type 1 feature. \"CAR-2\" is \"CAR\" removing type 2 feature. \"CAR-1w\" is \"CAR\"' removing attention weight assigned to type 1 feature.\nMethod Tweets Question AG_news BoW-SVM 40.34 86.35 88.77 TFIDF-SVM 79.96 88.85 90.89 Lg. Conv 24.13 83.55 87.18 Sm. Conv 40.51 88.57 84.35 WAR 72.50 89.97 89.18 CA 79.44 90.25 89.11 CAR-1 40.32 88.57 25.12 CAR-2 75.11 88.71 89.37 CAR-1w 78.74 90.39 90.19 CAR 81.26 90.95 90.45\nsification. Our proposed method could outperform the state-of-the-art traditional models and deep learning models."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointl learning to align and translate. CoRR, abs/1409.0473, 2014\nDzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End to-end attention-based large vocabulary speech recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, pp. 4945-4949, 2016.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. CoRR, abs/1512.03385, 2015.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residua networks. In Computer Vision - ECCV - 14th European Conference, pp. 630-645, 2016\nYoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP, A meeting of SIG. DAT, a Special Interest Group of the ACL, pp. 1746-1751, 2014.\nXiang Zhang. Junbo Zhao. and Yann LeCun. Character-level convolutional networks for text clas sification. In Advances in Neural Information Processing Systems 28: Annual Conference or. Neural Information Processing Systems 2015, pp. 649-657, 2015.\nTim Rocktaschel, Edward Grefenstette, Karl Moritz Hermann, Tomas Kocisky, and Phil Blunsom Reasoning about entailment with neural attention. CoRR, abs/1509.06664, 2015"}] |
B1akgy9xx | [{"section_index": "0", "section_name": "MAKING STOCHASTIC NEURAL NETWORKS FROM DETERMINISTIC ONES", "section_text": "School of Electrical Engineering. Korea Advanced Institute of Science Technology, Republic of Korea"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recently, deterministic deep neural networks (DNN) have demonstrated state-of-the-art perfor. mance on many supervised tasks, e.g., speech recognition (Hinton et al.|2012a) and object recog. nition (Krizhevsky et al. 2012). One of the main components underlying these successes is on the. efficient training methods for deeper and wider DNNs, which include backpropagation (Rumelhart et al. [1988), stochastic gradient descent (Robbins & Monro|1951), dropout/dropconnect (Hinton. et al. 2012b] Wan et al.]2013), batch/weight normalization (Ioffe & Szegedy2015] Salimans & Kingma2016), and various activation functions (Nair & Hinton2010] Gulcehre et al.2016). On the other hand, stochastic feedforward neural networks (SFNN) (Neal[1990) having random latent. units are often necessary in order to model complex stochastic natures in many real-world tasks, e.g.,. structured prediction (Tang & Salakhutdinov[2013), image generation (Goodfellow et al.[2014) and memory networks (Zaremba & Sutskever!2015). Furthermore, it has been believed that SFNN has several advantages beyond DNN (Raiko et al.|2014): it has more expressive power for multi-modal learning and regularizes better for large-scale learning..\nTraining large-scale SFNN is notoriously hard since backpropagation is not directly applicable. Cer-. tain stochastic neural networks using continuous random units are known to be trainable efficiently. using backpropagation under the variational techniques and the reparameterization tricks (Kingma. & Welling2013). On the other hand, training SFNN having discrete, i.e., binary or multi-modal. random units is more difficult since intractable probabilistic inference is involved requiring too many. random samples. There have been several efforts developing efficient training methods for SFNN. having binary random latent units (Neal]1990] Saul et al.]1996] Tang & Salakhutdinov2013]Ben- gio et al.2013fRaiko et al.2014] Gu et al. 2015) (see Section2.1|for more details). However,. training SFNN is still significantly slower than doing DNN of the same architecture, e.g., most prior.\n{kiminlee, jaehyungkim, jinwoos}@kaist.ac.kr, songchong@kaist.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "It has been believed that stochastic feedforward neural networks (SFNN) have. several advantages beyond deterministic deep neural networks (DNN): they have more expressive power allowing multi-modal mappings and regularize better due. to their stochastic nature. However, training SFNN is notoriously harder. In this. paper, we aim at developing efficient training methods for large-scale SFNN, in. particular using known architectures and pre-trained parameters of DNN. To this. end, we propose a new intermediate stochastic model, called Simplified-SFNN,. which can be built upon any baseline DNN and approximates certain SFNN by. simplifying its upper latent units above stochastic ones. The main novelty of our. approach is in establishing the connection between three models, i.e., DNN -> Simplified-SFNN -> SFNN, which naturally leads to an efficient training pro-. cedure of the stochastic models utilizing pre-trained parameters of DNN. Us-. 
ing several popular DNNs, we show how they can be effectively transferred to. the corresponding stochastic models for both multi-modal and classification tasks. on MNIST, TFD, CIFAR-10, CIFAR-100 and SVHN datasets. In particular, our stochastic model built from the wide residual network has 28 layers and 36 million. parameters, where the former consistently outperforms the latter for the classifica-. tion tasks on CIFAR-10 and CIFAR-100 due to its stochastic regularizing effect..\nworks on this line have considered a small number (at most 5 or so) of layers in SFNN. We aim fo the same goal, but our direction is orthogonal to them.\nInstead of training SFNN directly, we study whether pre-trained parameters of DNN (or easier moc els) can be transferred to it, possibly with further fine-tuning of light cost. This approach can b. attractive since one can utilize recent advances in DNN on its design and training. For example. one can design the network structure of SFNN following known specialized ones of DNN and us. their pre-trained parameters. To this end, we first try transferring pre-trained parameters of DNI. using sigmoid activation functions to those of the corresponding SFNN directly. In our experiments. the heuristic reasonably works well. For multi-modal learning, SFNN under such a simple trans formation outperforms DNN. Even for the MNIST classification, the former performs similarly a. the latter (see Section 2|for more details). However, it is questionable whether a similar strateg. works in general, particularly for other unbounded activation functions like ReLU (Nair & Hintor 2010) since SFNN has binary, i.e., bounded, random latent units. Moreover, it lost the regularizatio. benefit of SFNN: it is rather believed that transferring parameters of stochastic models to DNN help. its regularization, but the opposite direction is unlikely possible..\nTo address the issues, we propose a special form of stochastic neural networks, named Simplified. SFNN, which intermediates between SFNN and DNN, having the following properties. First. Simplified-SFNN can be built upon any baseline DNN, possibly having unbounded activation func tions. The most significant part of our approach lies in providing rigorous network knowledge trans ferring (Chen et al.2015) between Simplified-SFNN and DNN. In particular, we prove that param eters of DNN can be transformed to those of the corresponding Simplified-SFNN while preserving the performance, i.e., both represent the same mapping and features. Second, Simplified-SFNN ap. proximates certain SFNN, better than DNN, by simplifying its upper latent units above stochastic ones using two different non-linear activation functions. Simplified-SFNN is much easier to traii than SFNN while utilizing its stochastic nature for regularization..\nThe above connection DNN -> Simplified-SFNN -> SFNN naturally suggests the following training procedure for both SFNN and Simplified-SFNN: train a baseline DNN first and then fine-tune its. corresponding Simplified-SFNN initialized by the transformed DNN parameters. The pre-training. stage accelerates the training task since DNN is faster to train than Simplified-SFNN. In addition,. one can also utilize known DNN training techniques such as dropout and batch normalization for fine-tuning Simplified-SFNN. In our experiments, we train SFNN and Simplified-SFNN under the. proposed strategy. 
They consistently outperform the corresponding DNN for both multi-modal and classification tasks, where the former and the latter are for measuring the model expressive power and the regularization effect, respectively. To the best of our knowledge, we are the first to confirm that SFNN indeed regularizes better than DNN. We also construct the stochastic models following. the same network structure of popular DNNs including Lenet-5 (LeCun et al.] 1998), NIN (Lin et al.2014) and WRN (Zagoruyko & Komodakis]2016). In particular, WRN (wide residual net-. work) of 28 layers and 36 million parameters has shown the state-of-art performances on CIFAR-10 and CIFAR-100 classification datasets, and our stochastic models built upon WRN outperform the deterministic WRN on the datasets..\nOrganization. In Section[2] we focus on DNNs having sigmoid and ReLU activation functions and. study simple transformations of their parameters to those of SFNN. In Section|3] we consider DNNs having general activation functions and describe more advanced transformations via introducing a new model, named Simplified-SFNN.\nStochastic feedforward neural network (SFNN) is a hybrid model, which has both stochastic binary and deterministic hidden units. We first introduce SFNN with one stochastic hidden layer (anc without deterministic hidden layers) for simplicity. Throughout this paper, we commonly denote the bias for unit i and the weight matrix of the l-th hidden layer by b, and We, respectively. Then the stochastic hidden layer in SFNN is defined as a binary random vector with N1 units, i.e., h1\nIn the above, x is the input vector and o (x) = 1/ (1 + e-x) is the sigmoid function. Our conditional distribution of the output y is defined as follows:\nP(yx)=Ep(h1|x) P(yh)=Ep(h1|x) [W (y|W2h'+b2, o?)]\nwhere N(.) denotes the normal distribution with mean W2h1 + b2 and (fixed) variance o?. There-. fore, P (y x) can express a very complex, multi-modal distribution since it is a mixture of expo. nentially many normal distributions. The multi-layer extension is straightforward via a combination of stochastic and deterministic hidden layers, e.g., see Tang & Salakhutdinov(2013), Raiko et al.. (2014). Furthermore, one can use any other output distributions as like DNN, e.g., softmax for. classification tasks.\nThere are two computational issues for training SFNN: computing expectations with respect to. stochastic units in forward pass and computing gradients in backward pass. One can notice that both. are computationally intractable since they require summations over exponentially many configura tions of all stochastic units. First, in order to handle the issue in forward pass, one can use the follow-.\ning Monte Carlo approximation for estimating the expectation: P (y | x) 1 P(y | h(m) M\nwhere h(m) ~ P (h1 | x) and M is the number of samples. This random estimator is unbiased anc has relatively low variance (Tang & Salakhutdinov! 2013) since its accuracy does not depend on the dimensionality of h' and one can draw samples from the exact distribution. Next, in order to handl. the issue in backward pass, Neal(1990) proposed a Gibbs sampling, but it is known that it ofter mixes poorly. Saul et al.[(1996) proposed a variational learning based on the mean-field approxi mation, but it has additional parameters making the variational lower bound looser. 
More recently several other techniques have been proposed including unbiased estimators of the variational bounc using importance sampling (Tang & Salakhutdinov2013) Raiko et al.]2014) and biased/unbiasec estimators of the gradient for approximating backpropagation (Bengio et al.[2013] Raiko et al. 2014; Gu et al.]2015).\nDespite the recent advances, training SFNN is still very slow compared to DNN due to the sampling procedures: in particular, it is notoriously hard to train SFNN when the network structure is deepe. and wider. In order to handle these issues, we consider the following approximation:.\nP(y|x)=Ep(hi|x)[N(y|W2h1+b2, o?) N(y|Ep(h1|x) [W2hl]+b2,o?) )=N(y|W2o(W'x+b)+b2, oy)\nNote that the above approximation corresponds to replacing stochastic units by deterministic ones such that their hidden activation values are same as marginal distributions of stochastic units, i.e., SFNN can be approximated by DNN using sigmoid activation functions, say sigmoid-DNN. When there exist more latent layers above the stochastic one, one has to apply similar approximations to all of them, i.e., exchanging the orders of expectations and non-linear functions, for making DNN and SFNN are equivalent. Therefore, instead of training SFNN directly, one can try transferring pre- trained parameters of sigmoid-DNN to those of the corresponding SFNN directly: train sigmoid- DNN instead of SFNN, and replace deterministic units by stochastic ones for the inference purpose Although such a strategy looks somewhat 'rude', it was often observed in the literature that it rea- sonably works well for SFNN (Raiko et al.2014) and we also evaluate it as reported in Table[1 We also note that similar approximations appear in the context of dropout: it trains a stochastic model averaging exponentially many DNNs sharing parameters, but also approximates a single DNN well.\nNow we investigate a similar transformation in the case when DNN uses the unbounded ReLL activation function, say ReLU-DNN. Many recent deep networks are of ReLU-DNN type due tc the gradient vanishing problem, and their pre-trained parameters are often available. Although i is straightforward to build SFNN from sigmoid-DNN, it is less clear in this case since ReLU is\nN1 P(h|x)=P(h|x), where P(h=1|x)=(W,x+b) i=1\n1.5 1.5 Training data Training data + Samples from sigmoid-DNN + Samples from SFNN (sigmoid activation) 1 1 0.5 0.5 0 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 X X (a) (b)\nTraining data Training data Samples from sigmoid-DNN + Samples from SFNN (sigmoid activation) 0.5 0.5 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 X X\nFigure 1: The generated samples from (a) sigmoid-DNN and (b) SFNN which uses same parameter trained by sigmoid-DNN. One can note that SFNN can model the multiple modes in outupt space ? around x = 0.4.\nTable 1: The performance of simple parameter transformations from DNN to SFNN on the MNIST and synthetic datasets, where each layer of neural networks contains 800 and 50 hidden units for two datasets, respectively. For all experiments, the only first hidden layer of DNN is replaced by stochastic one. We report negative log-likelihood (NLL) and classification error rates.\nunbounded. To handle this issue, we redefine the stochastic latent units of SFNN\nN1 P(h|x)=][P(h|x), where P(h=1|x)=min<af i=1\nIn the above, f(x) = max{x, 0} is the ReLU activation function and a is some hyper-parameter. A simple transformation can be defined similarly as the case of sigmoid-DNN via replacing determin. istic units by stochastic ones. 
However, to preserve the parameter information of ReLU-DNN, one has to choose such that a f (w/x + b1) < 1 and rescale upper parameters W2 as follows:\nW/x+b1 max w1,b1)(w1,b1),(w2,b2) i,x\nThen, applying similar approximations as in (2), i.e., exchanging the orders of expectations and non-linear functions, one can observe that ReLU-DNN and SFNN are equivalent..\nMNIST Classification Multi-modal Learning Inference Model Network Structure Training NLL Training Error (%) Test Error (%) Test NLL sigmoid-DNN 2 hidden layers 0 0 1.54 5.290 SFNN 2 hidden layers 0 0 1.56 1.564 sigmoid-DNN 3 hidden layers 0.002 0.03 1.84 4.880 SFNN 3 hidden layers 0.022 0.04 1.81 0.575 sigmoid-DNN 4 hidden layers 0 0.01 1.74 4.850 SFNN 4 hidden layers 0.003 0.03 1.73 0.392 2 hidden layers 0.005 0.04 1.49 7.492 ReLU-DNN SFNN 2 hidden layers 0.819 4.50 5.73 2.678 ReLU-DNN 3 hidden layers 0 0 1.43 7.526 SFNN 3 hidden layers 1.174 16.14 17.83 4.468 ReLU-DNN 4 hidden layers 0 0 1.49 7.572 SFNN 4 hidden layers 1.213 13.13 14.64 1.470\nWe evaluate the performance of the simple transformations from DNN to SFNN on the MNIST dataset (LeCun et al.][1998) and the synthetic dataset (Bishop|1994), where the former and the latter are popular datasets used for a classification task and a multi-moda1 (i.e., one-to-many mappings) prediction learning, respectively. In all experiments reported in this paper, we commonly use the softmax and Gaussian with standard deviation of oy = O.05 are used for the output probability on classification and regression tasks, respectively. The only first hidden layer of DNN is replaced by stochastic one, and we use 500 samples for estimating the expectations in the SFNN inference As reported in Table[1] we observe that the simple transformation often works well for both tasks: the SFNN and sigmoid-DNN inferences (using same parameters trained by sigmoid-DNN) perform similarly for the classification task and the former significantly outperforms for the latter for the\nmulti-modal task (also see Figure[1). It might suggest some possibilities that the expensive SFNN training might not be not necessary, depending on the targeted learning quality. However, in case of ReLU, SFNN performs much worse than ReLU-DNN for the MNIST classification task under the parameter transformation.\nIn this section, we propose an advanced method to utilize the pre-trained parameters of DNN fo. training SFNN. As shown in the previous section, simple parameter transformations from DNN tc. SFNN are not clear to work in general, in particular for activation functions other than sigmoid. Moreover, training DNN does not utilize the stochastic regularizing effect, which is an importan benefit of SFNN. To address the issues, we design an intermediate model, called Simplified-SFNN The proposed model is a special form of stochastic neural networks, which approximates certair. SFNN by simplifying its upper latent units above stochastic ones. Then, we establish more rigorous. connections between three models: DNN -> Simplified-SFNN -> SFNN, which leads to an effi cient training procedure of the stochastic models utilizing pre-trained parameters of DNN. In oui. 
experiments, we evaluate the strategy for various tasks and popular DNN architectures..\n3.1 SIMPLIFIED-SFNN OF TWO HIDDEN LAYERS AND NON-NEGATIVE ACTIVATION FUNCTIONS\nN1 P(h'|x)=IIP(h|x), where P(h=1|x)=min<a1f i=1\nh2(x) = [f (2(Ep(h|x) [s(W3h1 +bJ)]-s(O))) :Vj E N2]\nwhere a2 > 0 is a hyper-parameter for the second layer and s : R -> R is a differentiable functior with s\"(x) < 1 for all x E R, e.g., sigmoid and tanh functions. In our experiments, we use the sigmoid function for s(x). Here, one can note that the proposed model also has the same computa tional issues with SFNN in forward and backward passes due to the complex expectation. One car train Simplified-SFNN similarly as SFNN: we use Monte Carlo approximation for estimating the expectation and the (biased) estimator of the gradient for approximating backpropagation inspirec oyRaiko et al.(2014) (more detailed explanation is presented in Appendix[A)\nhl(x)= hx)=f W,hl-1(x)+1 :iE Nl\nwhere h(x) = x. As stated in the following theorem, we establish a rigorous way how to initialize parameters of Simplified-SFNN in order to transfer the knowledge stored in DNN.\nTheorem 1 Assume that both DNN and Simplified-SFNN with two hidden layers have same network structure with non-negative activation function f. Given parameters {wl, bl : l = 1, 2} of DNN and input dataset D, choose those of Simplified-SFNN as follows.\nY2/1 Q1.W s' (0) Y2 Y1Y2\nFor clarity of presentation, we first introduce Simplified-SFNN with two hidden layers and non- negative activation functions, where its extensions to multiple layers and general activation functions are presented in Appendix B We also remark that we primarily describe fully-connected Simplified- SFNNs, but their convolutional versions can also be naturally defined. In Simplified-SFNN of two hidden layers, we assume that the first and second hidden layers consist of stochastic binary hidden units and deterministic ones, respectively. As like (3), the first layer is defined as a binary random\nwhere x is the input vector, Q1 > 0 is a hyper-parameter for the first layer, and f : R -> R+ is some non-negative non-linear activation function with f'(x) < 1 for all x E R, e.g., ReLU and sigmoid activation functions. Now the second layer is defined as the following deterministic vector with N2 units, i.e., h2(x) E RN2\nWe are interested in transferring parameters of DNN to Simplified-SFNN to utilize the training. benefits of DNN since the former is much faster to train than the latter. To this end, we consider the following DNN of which l-th hidden layer is deterministic and defined as follows:.\n3.5 35 Baseline ReLU-DNN Layer 1 Layer 2 I# of samples = 1000 ReLU-DNN trained by ReLU-DNN 30 Input Output 3 ReLU-DNN trained by Simplified-SFNN 25 2.5 20 Layer 1 Layer 2 5 Tees 2 Input Output 10 1.5 5 J: Stochastic layer ->: Stochasticity J: Deterministic layer 50 150 1 2 34 510 0 100 200 250 50100 The value of ~2 Epoch (a) (b) (c)\nFigure 2: (a) Simplified-SFNN (top) and SFNN (bottom). (b) For first 200 epochs, we train a baseline ReLU-DNN. Then, we train simplified-SFNN initialized by the DNN parameters under transformation (8) with 2 = 50. We observe that training ReLU-DNN* directly does not reduce the test error even when network knowledge transferring still holds between the baseline ReLU- DNN and the corresponding ReLU-DNN*. (c) As the value of 2 increases, knowledge transferring loss measured as |b| N |h? (x) - h! (x)| is decreasing. 
X\nW hx)- Vj,x E D 2s' Y2\nGiven a Simplified-SFNN model, the corresponding SFNN can be naturally defined by taking out the. expectation in (6). As illustrated in Figure2(a)] the main difference between SFNN and Simplified SFNN is that the randomness of the stochastic layer propagates only to its upper layer in the latter. i.e., the randomness of h' is averaged out at its upper units h? and does not propagate to h3 or output. y. Hence, Simplified-SFNN is no longer a Bayesian network. This makes training Simplified-SFNN. much easier than SFNN since random samples are not required at some layers'|and consequently the quality of gradient estimations can also be improved, in particular for unbounded activatior functions. Furthermore, one can use the same approximation procedure (2) to see that Simplified. SFNN approximates SFNN. However, since Simplified-SFNN still maintains binary random units it uses approximation steps later, in comparison with DNN. In summary, Simplified-SFNN is an. intermediate model between DNN and SFNN, i.e., DNN -> Simplified-SFNN -> SFNN.\n1 For example, if one replaces the first feature maps in the fifth residual unit of Pre-ResNet having 164. layers (He et al.||2016) by stochastic ones, then the corresponding DNN, Simplified-SFNN and SFNN took 1 mins 35 secs, 2 mins 52 secs and 16 mins 26 secs per each training epoch, respectively, on our machine with. one Intel CPU (Core i7-5820K 6-Core@3.3GHz) and one NVIDIA GPU (GTX Titan X, 3072 CUDA cores) Here, we trained both stochastic models using the biased estimator (Raiko et al.l[2014) with 10 random sample. on CIFAR-10 dataset.\nThe proof of the above theorem is presented in AppendixD.1. Our proof is built upon the first-order Taylor expansion of non-linear function s(x). Theorem[1|implies that one can make Simplified-SFNN represent the function values of DNN with bounded errors using a linear trans- formation. Furthermore, the errors can be made arbitrarily small by choosing large 72, i.e.,.\nThe above connection naturally suggests the following training procedure for both SFNN and. Simplified-SFNN: train a baseline DNN first and then fine-tune its corresponding Simplified-SFNN initialized by the transformed DNN parameters. Finally, the fine-tuned parameters can be used for SFNN as well. We evaluate the strategy for the MNIST classification, which is reported in Table2 (see Appendix C|for more detailed experiment setups). We found that SFNN under the two-stage. training always performs better than SFNN under a simple transformation (4) from ReLU-DNN.\nTable 2: Classification test error rates [%] on MNIST, where each layer of neural networks contains 800 hidden units. All Simplified-SFNNs are constructed by replacing the first hidden layer of a base. line DNN with stochastic hidden layer. We also consider training DNN and fine-tuning Simplified SFNN using batch normalization (BN) and dropout (DO). The performance improvements beyond baseline DNN (due to fine-tuning DNN parameters under Simplified-SFNN) are calculated in the. bracket.\nMore interestingly, Simplified-SFNN consistently outperforms its baseline DNN due to the stochas- tic regularizing effect, even when we train both models using dropout (Hinton et al.|2012b) and batch normalization (Ioffe & Szegedy|2015). 
In order to confirm the regularization effects, one can again approximate a trained Simplified-SFNN by a new deterministic DNN, which we call DNN* and which differs from its baseline DNN, under the following approximation at upper latent units above binary random units:

$$\mathbb{E}_{P(h^\ell|x)}\left[ s\left(W^{\ell+1} h^\ell + b^{\ell+1}\right) \right] \;\approx\; s\left( \sum_i W^{\ell+1}_{\cdot i}\, P\left(h_i^\ell = 1 \mid x\right) + b^{\ell+1} \right). \tag{9}$$

We found that DNN* using fine-tuned parameters of Simplified-SFNN also outperforms the baseline DNN, as shown in Table 2 and Figure 2(b).

3.3 EXPERIMENTAL RESULTS ON MULTI-MODAL LEARNING AND CONVOLUTIONAL NETWORKS

We present several experimental results for both multi-modal and classification tasks on MNIST (LeCun et al., 1998), Toronto Face Database (TFD) (Susskind et al., 2010), CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009) and SVHN (Netzer et al., 2011). Here, we present some key results due to the space constraints; more detailed explanations of our experiment setups are presented in Appendix C.

We first verify that it is possible to learn one-to-many mapping via Simplified-SFNN on the TFD and MNIST datasets, where the former and the latter are used to predict multiple facial expressions from the mean of face images per individual, and the lower half of the MNIST digit given the upper half, respectively. We remark that both tasks are commonly performed in recent other works to test multi-modal learning using SFNN (Raiko et al., 2014; Gu et al., 2015). In all experiments we first train a baseline DNN, and the trained parameters of DNN are used for further fine-tuning those of Simplified-SFNN. As shown in Table 3 and Figure 3, stochastic models outperform their baseline DNN, and generate different digits for the case of ambiguous inputs (while DNN does not). We also evaluate the regularization effect of Simplified-SFNN for the classification tasks on CIFAR-10, CIFAR-100 and SVHN. Table 4 reports the classification error rates using convolutional neural networks such as Lenet-5 (LeCun et al., 1998), NIN (Lin et al., 2014) and WRN (Zagoruyko & Komodakis, 2016).
Table 3: Test negative log-likelihood (NLL) on the MNIST-half and TFD datasets, where each layer of the neural networks contains 200 hidden units. All Simplified-SFNNs are constructed by replacing the first hidden layer of a baseline DNN with a stochastic hidden layer.

Inference Model | Training Model | MNIST-half (2 hidden layers) | MNIST-half (3 hidden layers) | TFD (2 hidden layers) | TFD (3 hidden layers)
sigmoid-DNN | sigmoid-DNN | 1.409 | 1.720 | -0.064 | 0.005
SFNN | sigmoid-DNN | 0.644 | 1.076 | -0.461 | -0.401
Simplified-SFNN | fine-tuned by Simplified-SFNN | 1.474 | 1.757 | -0.071 | -0.028
SFNN | fine-tuned by Simplified-SFNN | 0.619 | 0.991 | -0.509 | -0.423
ReLU-DNN | ReLU-DNN | 1.747 | 1.741 | 1.271 | 1.232
SFNN | ReLU-DNN | -1.019 | -1.021 | 0.823 | 1.121
Simplified-SFNN | fine-tuned by Simplified-SFNN | 2.122 | 2.226 | 0.175 | 0.343
SFNN | fine-tuned by Simplified-SFNN | -1.290 | -1.061 | -0.380 | -0.193

Figure 3: Generated samples for predicting the lower half of the MNIST digit given the upper half: the original digits and the corresponding inputs (first), and the generated samples from sigmoid-DNN (second), SFNN under the simple transformation (third), and SFNN fine-tuned by Simplified-SFNN (fourth). We observed that SFNN fine-tuned by Simplified-SFNN can generate more diverse samples from the same inputs, e.g., 3 and 8, than SFNN under the simple transformation.

Table 4: Test error rates [%] on CIFAR-10, CIFAR-100 and SVHN. The error rates for WRN are from our experiments; the original ones reported in (Zagoruyko & Komodakis, 2016) are in brackets. Results with † are obtained using the horizontal flipping and random cropping augmentation.

Inference model | Training Model | CIFAR-10 | CIFAR-100 | SVHN
Lenet-5 | Lenet-5 | 37.67 | 77.26 | 11.18
Lenet-5* | Simplified-SFNN | 33.58 | 73.02 | 9.88
NIN | NIN | 9.51 | 32.66 | 3.21
NIN* | Simplified-SFNN | 9.33 | 30.81 | 3.01
WRN | WRN | 4.22 (4.39)† | 20.30 (20.04) | 3.25
WRN* (one stochastic layer) | Simplified-SFNN | 4.21 | 19.98 | 3.09†
WRN* (two stochastic layers) | Simplified-SFNN | 4.14 | 19.72 | 3.06

Due to the regularization effects, Simplified-SFNNs consistently outperform
their baseline DNNs. For example, WRN* outperforms WRN by 0.08% on CIFAR-10 and 0.58% on CIFAR-100. We expect that introducing more stochastic layers would decrease the error further (see Figure 4), but it increases the fine-tuning time-complexity of Simplified-SFNN.

In order to develop an efficient training method for large-scale SFNN, this paper proposes a new intermediate stochastic model, called Simplified-SFNN. We establish the connection between three models, i.e., DNN → Simplified-SFNN → SFNN, which naturally leads to an efficient training procedure for the stochastic models utilizing pre-trained parameters and architectures of DNN. We believe that our work brings a new important direction for training stochastic neural networks, which should be of broader interest in many related applications.

[Figure 4: test error [%] on CIFAR-100 per training epoch for the baseline WRN and for WRN* trained by Simplified-SFNN with one and two stochastic layers.]

Figure 4: Test errors of WRN* per training epoch on CIFAR-100.

REFERENCES

Christopher M Bishop. Mixture density networks. 1994.

Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. Net2Net: Accelerating learning via knowledge transfer. arXiv preprint arXiv:1511.05641, 2015.

Shixiang Gu, Sergey Levine, Ilya Sutskever, and Andriy Mnih. MuProp: Unbiased backpropagation for stochastic neural networks. arXiv preprint arXiv:1511.05176, 2015.

Caglar Gulcehre, Marcin Moczulski, Misha Denil, and Yoshua Bengio. Noisy activation functions. arXiv preprint arXiv:1603.00391, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Geoffrey E Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 2012a.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

Alex Krizhevsky and Geoffrey E Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. International Conference on Learning Representations (ICLR), 2014.

Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In International Conference on Machine Learning (ICML), 2010.

Radford M Neal. Learning stochastic feedforward networks. Department of Computer Science, University of Toronto, 1990.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning.
NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.

Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. arXiv preprint arXiv:1406.2989, 2014.

David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning internal representations by error propagation. Technical report, MIT Press, 1988.

Yoshua Bengio, Nicholas Leonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning (ICML), 2015.

Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Lawrence K Saul, Tommi Jaakkola, and Michael I Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 1996.

Josh M Susskind, Adam K Anderson, and Geoffrey E Hinton. The Toronto Face Database. Department of Computer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep., 2010.

Yichuan Tang and Ruslan R Salakhutdinov. Learning stochastic feedforward neural networks. In Advances in Neural Information Processing Systems (NIPS), 2013.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.

Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015.

TRAINING SIMPLIFIED-SFNN

The parameters of Simplified-SFNN can be learned using a variant of the backpropagation algorithm (Rumelhart et al., 1988) in a similar manner to DNN. However, in contrast to DNN, there are two computational issues for Simplified-SFNN: computing expectations with respect to stochastic units in the forward pass and computing gradients in the backward pass. One can notice that both are intractable since they require summations over all possible configurations of all stochastic units. First, in order to handle the issue in the forward pass, we use the following Monte Carlo approximation for estimating the expectation:

$$\mathbb{E}_{P(h^1|x)}\left[ s\left(W^2 h^1 + b^2\right) \right] \;\approx\; \frac{1}{M} \sum_{m=1}^{M} s\left(W^2 h^{(m)} + b^2\right), \qquad h^{(m)} \sim P\left(h^1 \mid x\right), \tag{10}$$

where $M$ is the number of samples; in our experiments, we commonly choose $M = 20$. This random estimator is unbiased and has relatively low variance (Tang & Salakhutdinov, 2013) since its accuracy does not depend on the dimensionality of $h^1$ and one can draw samples from the exact distribution.
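The unbiasedness of (10) is easy to check on a toy layer small enough that the expectation can also be computed exactly by enumerating all $2^{N_1}$ binary configurations. The snippet below is our own self-contained check; all names, sizes and the random distributions of the toy parameters are assumptions.

```python
import itertools
import numpy as np

# Compare the exact expectation of s(W2 h + b2) under a factorized Bernoulli
# distribution with the M-sample Monte Carlo estimate of eq. (10).
rng = np.random.default_rng(0)
N1, N2, M = 8, 4, 20
p = rng.uniform(0.1, 0.9, size=N1)               # P(h_i^1 = 1 | x) for a fixed x
W2 = rng.normal(size=(N2, N1))
b2 = rng.normal(size=N2)
s = lambda z: 1.0 / (1.0 + np.exp(-z))           # sigmoid

exact = np.zeros(N2)
for config in itertools.product([0.0, 1.0], repeat=N1):
    h = np.array(config)
    prob = np.prod(np.where(h == 1.0, p, 1.0 - p))
    exact += prob * s(W2 @ h + b2)

samples = (rng.random((M, N1)) < p).astype(float)
mc = s(samples @ W2.T + b2).mean(axis=0)
print(np.abs(exact - mc).max())                  # small already for moderate M
```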
Next, in order to handle the issue in the backward pass, we use the following approximation inspired by Raiko et al. (2014):

$$\frac{\partial}{\partial W^2}\, \mathbb{E}_{P(h^1|x)}\left[s\left(W^2 h^1 + b^2\right)\right] \;\approx\; \frac{1}{M} \sum_{m=1}^{M} \frac{\partial}{\partial W^2}\, s\left(W^2 h^{(m)} + b^2\right), \qquad h^{(m)} \sim P\left(h^1 \mid x\right), \tag{11}$$

$$\frac{\partial}{\partial W^1}\, \mathbb{E}_{P(h^1|x)}\left[s\left(W^2 h^1 + b^2\right)\right] \;\approx\; \frac{1}{M} \sum_{m=1}^{M} s'\left(W^2 h^{(m)} + b^2\right) W^2\, \frac{\partial P\left(h^1 = 1 \mid x\right)}{\partial W^1},$$

i.e., when differentiating with respect to the lower parameters, each binary sample $h^{(m)}$ is treated as if it were the underlying probability $P(h^1 = 1 \mid x)$, which yields a biased but low-variance gradient estimator.

In this section, we describe how the network knowledge transferring between Simplified-SFNN and DNN, i.e., Theorem 1, generalizes to multiple layers and general activation functions.

A deeper Simplified-SFNN with $L$ hidden layers can be defined similarly to the case of $L = 2$. We also establish network knowledge transferring between Simplified-SFNN and DNN with $L$ hidden layers, as stated in the following theorem. Here, we assume that stochastic layers are not consecutive for simpler presentation, but the theorem is generalizable to consecutive stochastic layers.

Theorem 2 Assume that both DNN and Simplified-SFNN with $L$ hidden layers have the same network structure with non-negative activation function $f$. Given parameters $\{W^l, b^l : l = 1, \dots, L\}$ of DNN and input dataset $D$, choose the same ones for Simplified-SFNN initially and modify them for each l-th stochastic layer and its upper layer as follows:

$$\alpha_l \leftarrow \frac{1}{\gamma_l}, \qquad \left(\alpha_{l+1},\ W^{l+1},\ b^{l+1}\right) \leftarrow \left(\frac{\gamma_l \gamma_{l+1}}{s'(0)},\ \frac{W^{l+1}}{\gamma_{l+1}},\ \frac{b^{l+1}}{\gamma_l \gamma_{l+1}}\right).$$

Then,

$$\lim_{\substack{\gamma_{l+1} \to \infty \\ \forall\,\text{stochastic hidden layer } l}} \left| h_j^L(x) - \hat{h}_j^L(x) \right| = 0, \qquad \forall j,\ x \in D.$$

The above theorem again implies that it is possible to transfer knowledge from DNN to Simplified-SFNN by choosing large $\gamma_{l+1}$. The proof of Theorem 2 is similar to that of Theorem 1 and given in Appendix D.2.

In this section, we describe an extended version of Simplified-SFNN which can utilize any activation function. To this end, we modify the definitions of the stochastic layers and their upper layers by introducing certain additional terms. If the l-th hidden layer is stochastic, then we slightly modify the original definition (5) as follows:

$$P\left(h^l \mid x\right) = \prod_{i=1}^{N_l} P\left(h_i^l \mid x\right), \quad \text{with } P\left(h_i^l = 1 \mid x\right) = \min\left\{ \alpha_l f\left(W_i^l h^{l-1}(x) + b_i^l\right) + \frac{1}{2},\ 1 \right\},$$

and its upper layer is modified so that the constant offset of $1/2$ in the firing probabilities is subtracted out:

$$h_k^{l+1}(x) = f\left( \alpha_{l+1}\left( \mathbb{E}_{P(h^l|x)}\left[ s\left(W_k^{l+1} h^l + b_k^{l+1}\right) \right] - s(0) - \frac{s'(0)}{2} \sum_i W_{ki}^{l+1} \right) \right).$$

Under this general Simplified-SFNN model, we also show that transferring network knowledge from DNN to Simplified-SFNN is possible, as stated in the following theorem. Here, we again assume that stochastic layers are not consecutive for simpler presentation.

Theorem 3 Assume that both DNN and Simplified-SFNN with $L$ hidden layers have the same network structure with non-linear activation function $f$. Given parameters $\{W^l, b^l : l = 1, \dots, L\}$ of DNN and input dataset $D$, choose the same ones for Simplified-SFNN initially and modify them for each l-th stochastic layer and its upper layer as follows:

$$\alpha_l \leftarrow \frac{1}{2\gamma_l}, \qquad \left(\alpha_{l+1},\ W^{l+1},\ b^{l+1}\right) \leftarrow \left(\frac{2\gamma_l \gamma_{l+1}}{s'(0)},\ \frac{W^{l+1}}{\gamma_{l+1}},\ \frac{b^{l+1}}{2\gamma_l \gamma_{l+1}}\right).$$

Then,

$$\lim_{\substack{\gamma_{l+1} \to \infty \\ \forall\,\text{stochastic hidden layer } l}} \left| h_j^L(x) - \hat{h}_j^L(x) \right| = 0, \qquad \forall j,\ x \in D.$$

We omit the proof of the above theorem since it is a rather direct adaptation of that of Theorem 2.

In this section, we give a detailed explanation of all the experiments described in Section 3. In all experiments, softmax and a Gaussian with standard deviation 0.05 are used as the output probability for the classification tasks and the multi-modal prediction, respectively. The loss was minimized using the ADAM learning rule (Kingma & Ba, 2014) with a mini-batch size of 128. We used an exponentially decaying learning rate.

CLASSIFICATION ON MNIST

The MNIST dataset consists of 28 × 28 pixel greyscale images, each containing a digit 0 to 9, with 60,000 training and 10,000 test images. For this experiment, we do not use any data augmentation or pre-processing. Hyper-parameters are tuned on the validation set consisting of the last 10,000 training images. All Simplified-SFNNs are constructed by replacing the first hidden layer of a baseline DNN with a stochastic hidden layer. As described in Section 3.2, we train Simplified-SFNNs under the two-stage procedure: we first train a baseline DNN for the first 200 epochs, and the trained parameters of DNN are used for initializing those of Simplified-SFNN, which we then train for 50 epochs. We choose the hyper-parameter $\gamma_2 = 50$ in the parameter transformation. All Simplified-SFNNs are trained with $M = 20$ samples at each epoch, and in the test, we use 500 samples.
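At test time the stochastic models average their predictions over many forward passes (500 samples in the setup above). A minimal sketch of this inference step follows; the function names and the assumption that the forward pass returns class probabilities and draws fresh binary samples on each call are ours.

```python
import numpy as np

def stochastic_predict(x, forward_with_samples, n_test_samples=500):
    """Test-time inference for the stochastic models: average the class
    probabilities over repeated random forward passes and return the
    most likely class. `forward_with_samples` stands in for any forward
    function (e.g. SFNN or Simplified-SFNN) that samples its binary
    units anew on every call."""
    probs = np.mean([forward_with_samples(x) for _ in range(n_test_samples)],
                    axis=0)
    return int(np.argmax(probs))
```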
The Toronto Face Database (TFD) (Susskind et al., 2010) dataset consists of 48 × 48 pixel greyscale images, each containing a face image of 900 individuals with 7 different expressions. Similar to Raiko et al. (2014), we use 124 individuals with at least 10 facial expressions as data. We randomly choose 100 individuals with 1403 images for training and the remaining 24 individuals with 326 images for the test. We take the mean of face images per individual as the input and set the output as the different expressions of the same individual. The MNIST dataset consists of 28 × 28 pixel greyscale images, each containing a digit 0 to 9, with 60,000 training and 10,000 test images. For this experiment, each pixel of every digit image is binarized using its grey-scale value. We take the upper half of the MNIST digit as the input and set the output as its lower half. All Simplified-SFNNs are constructed by replacing the first hidden layer of a baseline DNN with a stochastic hidden layer. We train Simplified-SFNNs with $M = 20$ samples at each epoch, and in the test, we use 500 samples. We use 200 hidden units for each layer of the neural networks in the two experiments. The learning rate is chosen from {0.005, 0.002, 0.001, ..., 0.0001}, and the best result is reported for both tasks.

The CIFAR-10 and CIFAR-100 datasets consist of 50,000 training and 10,000 test images. The SVHN dataset consists of 73,257 training and 26,032 test images.² We pre-process the data using global contrast normalization and ZCA whitening. For these datasets, we design a convolutional version of Simplified-SFNN. In a similar manner to the case of fully-connected networks, one can define a stochastic convolution layer, which considers the input feature map as a binary random matrix and generates the output feature map as defined in (6). All Simplified-SFNNs are constructed by replacing a hidden feature map of the baseline models, i.e., Lenet-5, NIN and WRN, with a stochastic one, as shown in Figure 5(d). We use WRN with 16 and 28 layers for the SVHN and CIFAR datasets, respectively, since they showed state-of-the-art performance as reported by Zagoruyko & Komodakis (2016). In the case of WRN, we introduce up to two stochastic convolution layers. For 100 epochs, we first train the baseline models, i.e., Lenet-5, NIN and WRN, and the trained parameters are used for initializing those of the Simplified-SFNNs. All Simplified-SFNNs are trained with $M = 5$ samples and the test error is only measured by the approximation (9). The test errors of the baseline models are measured after training them for 200 epochs, similar to Zagoruyko & Komodakis (2016).

²We do not use the extra SVHN dataset for training.

[Figure 5: architecture diagrams of (a) Lenet-5, (b) NIN, (c) WRN with 16 layers and (d) WRN with 28 layers, showing convolution, pooling and fully-connected layers together with the stochastic feature maps.]

Figure 5: The overall structures of (a) Lenet-5, (b) NIN, (c) WRN with 16 layers, and (d) WRN with 28 layers. The red feature maps correspond to the stochastic ones. In the case of WRN, we introduce one (v′ = 3) or two (v′ = 2) stochastic feature maps.
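The stochastic convolution layer described above can be sketched in a few lines: the non-negative input feature map is turned into a binary random matrix, entry by entry, in direct analogy to eq. (5). This is our own NumPy illustration; the function names and the channel-first layout are assumptions.

```python
import numpy as np

def stochastic_feature_map(feature_map, alpha, rng):
    """Sample a binary feature map whose entries fire with probability
    min{alpha * activation, 1}, analogously to eq. (5). The input is
    assumed non-negative (e.g. post-ReLU); the next convolution then
    averages s(.) over such samples as in eqs. (6) and (10)."""
    p = np.minimum(alpha * feature_map, 1.0)       # per-pixel firing probability
    return (rng.random(p.shape) < p).astype(np.float32)

rng = np.random.default_rng(0)
fmap = np.maximum(rng.normal(size=(16, 8, 8)), 0)  # C x H x W map after ReLU
binary_fmap = stochastic_feature_map(fmap, alpha=0.5, rng=rng)
```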
D PROOFS OF THEOREMS

D.1 PROOF OF THEOREM 1

First, since $\gamma_1 \ge \max_{i,\,x \in D} f\left(W_i^1 x + b_i^1\right)$, the choice $\alpha_1 = 1/\gamma_1$ gives

$$P\left(h_i^1 = 1 \mid x; W^1, b^1\right) = \min\left\{\alpha_1 f\left(W_i^1 x + b_i^1\right),\ 1\right\} = \frac{1}{\gamma_1} f\left(W_i^1 x + b_i^1\right) = \frac{\hat{h}_i^1(x)}{\gamma_1}, \qquad \forall i,\ x \in D. \tag{13}$$

From Taylor's theorem, for the incoming signal $\xi_j(h) = W_j^2 h^1 + b_j^2$ there exists a remainder $R(\xi_j(h))$ with $|R(t)| \le t^2/2$ (using $|s''(z)| \le 1$) such that

$$\mathbb{E}_{P(h^1|x)}\left[s\left(\xi_j(h)\right)\right] = \sum_{h^1} \left(s(0) + s'(0)\,\xi_j(h) + R\left(\xi_j(h)\right)\right) P\left(h^1 \mid x\right) = s(0) + s'(0)\left(\sum_i W_{ji}^2\, P\left(h_i^1 = 1 \mid x\right) + b_j^2\right) + \mathbb{E}_{P(h^1|x)}\left[R\left(\xi_j(h)\right)\right].$$

Since we assume that $|f'(x)| \le 1$, i.e., $f$ is 1-Lipschitz, the following inequality holds:

$$\left| h_j^2\left(x; W^2, b^2\right) - \hat{h}_j^2(x) \right| \;\le\; \alpha_2\, \mathbb{E}_{P(h^1|x)}\left[\left|R\left(\xi_j(h)\right)\right|\right] \;\le\; \frac{\alpha_2}{2}\, \mathbb{E}_{P(h^1|x)}\left[\left(W_j^2 h^1 + b_j^2\right)^2\right],$$

which, after substituting the transformed parameters (8) and combining with (13), yields the bound of Theorem 1.

D.2 PROOF OF THEOREM 2

For the proof of Theorem 2, we first state two key lemmas on error propagation in Simplified-SFNN.

Lemma 4 Assume that there exists some positive constant $B$ such that

$$\left| h_i^{l-1}(x) - \hat{h}_i^{l-1}(x) \right| \le B, \qquad \forall i,\ x \in D,$$

and that the l-th hidden layer of Simplified-SFNN is deterministic. Then,

$$\left| h_j^l(x) - \hat{h}_j^l(x) \right| \le B\, N_{l-1}\, W_{\max}^l, \qquad \forall j,\ x \in D, \qquad \text{where } W_{\max}^l = \max_{ij} \left|W_{ij}^l\right|.$$

Proof. Writing $h^{l-1}(x) = \hat{h}^{l-1}(x) + \epsilon$ with $|\epsilon_i| \le B$ and using $|f'(x)| \le 1$,

$$\left| h_j^l(x) - \hat{h}_j^l(x) \right| \le \left| W_j^l\left(h^{l-1}(x) - \hat{h}^{l-1}(x)\right) \right| \le B\, N_{l-1}\, W_{\max}^l, \qquad \forall j,\ x \in D.$$

Lemma 5 Assume that there exists some positive constant $B$ such that

$$\left| h_i^{l-1}(x) - \hat{h}_i^{l-1}(x) \right| \le B, \qquad \forall i,\ x \in D,$$

and that the l-th hidden layer is stochastic, with parameters chosen as in Theorem 2, i.e., $\alpha_l = 1/\gamma_l$ and $\left(\alpha_{l+1}, W^{l+1}, b^{l+1}\right) = \left(\frac{\gamma_l \gamma_{l+1}}{s'(0)}, \frac{W^{l+1}}{\gamma_{l+1}}, \frac{b^{l+1}}{\gamma_l \gamma_{l+1}}\right)$. Then,

$$\left| h_k^{l+1}(x) - \hat{h}_k^{l+1}(x) \right| \le B\, N_{l-1} N_l\, W_{\max}^l W_{\max}^{l+1} + \frac{\left(N_l W_{\max}^{l+1} + b_{\max}^{l+1}\right)^2 \gamma_l}{2\, s'(0)\, \gamma_{l+1}}, \qquad \forall k,\ x \in D,$$

where $b_{\max}^{l+1} = \max_j \left|b_j^{l+1}\right|$ and $W_{\max}^{l+1} = \max_{jk} \left|W_{jk}^{l+1}\right|$.

Proof. As in Lemma 4, the firing probabilities of the stochastic layer satisfy $P\left(h_j^l = 1 \mid x; W^l, b^l\right) = \frac{1}{\gamma_l}\left(\hat{h}_j^l(x) + \delta_j\right)$ with $|\delta_j| \le B\, N_{l-1}\, W_{\max}^l$. Next, consider the upper hidden layer of the stochastic layer. From Taylor's theorem applied to the binary random vector $h^l \in \{0, 1\}^{N_l}$, one can write

$$\mathbb{E}_{P(h^l|x)}\left[s\left(\xi_k(h^l)\right)\right] = \sum_{h^l} \left(s(0) + s'(0)\,\xi_k(h^l) + R\left(\xi_k(h^l)\right)\right) P\left(h^l \mid x\right),$$

where $\xi_k(h^l) = W_k^{l+1} h^l + b_k^{l+1}$ is the incoming signal. Since we assume that $|f'(x)| \le 1$, and using $|s''(z)| \le 1$ for the remainder, the expected remainder contributes at most $\frac{\alpha_{l+1}}{2}\, \mathbb{E}_{P(h^l|x)}\left[\left(W_k^{l+1} h^l + b_k^{l+1}\right)^2\right]$ to the error. Therefore, it follows that

$$\left| h_k^{l+1}(x) - \hat{h}_k^{l+1}(x) \right| \le B\, N_{l-1} N_l\, W_{\max}^l W_{\max}^{l+1} + \frac{\left(N_l W_{\max}^{l+1} + b_{\max}^{l+1}\right)^2 \gamma_l}{2\, s'(0)\, \gamma_{l+1}},$$

since we set $\left(\alpha_{l+1}, W^{l+1}, b^{l+1}\right) = \left(\frac{\gamma_l \gamma_{l+1}}{s'(0)}, \frac{W^{l+1}}{\gamma_{l+1}}, \frac{b^{l+1}}{\gamma_l \gamma_{l+1}}\right)$. This completes the proof of Lemma 5.

Assume now that the l-th layer is the first stochastic hidden layer in Simplified-SFNN. Then, from Theorem 1, we have

$$\left| h_j^{l+1}(x) - \hat{h}_j^{l+1}(x) \right| \le \frac{\left(N_l W_{\max}^{l+1} + b_{\max}^{l+1}\right)^2 \gamma_l}{2\, s'(0)\, \gamma_{l+1}}, \qquad \forall j,\ x \in D.$$

According to Lemmas 4 and 5, the final error accumulated over the layers is bounded by

$$\left| h_j^L(x) - \hat{h}_j^L(x) \right| \le \sum_{l:\ \text{stochastic hidden layer}} \frac{\left(N_l W_{\max}^{l+1} + b_{\max}^{l+1}\right)^2 \gamma_l}{2\, s'(0)\, \gamma_{l+1}}\ C_l, \qquad \forall j,\ x \in D,$$

for constants $C_l$ depending only on the weight magnitudes, and hence

$$\lim_{\substack{\gamma_{l+1} \to \infty \\ \forall\,\text{stochastic hidden layer } l}} \left| h_j^L(x) - \hat{h}_j^L(x) \right| = 0, \qquad \forall j,\ x \in D.$$
rJM69B5xx

1 INTRODUCTION

Machine intelligence has had some notable successes, however often in narrow domains which are sometimes of little practical use to humans - for instance games like chess (Campbell et al., 2002) or Go (Silver et al., 2016). If we aimed to build a general AI that would be able to efficiently assist humans in a wide range of settings, we would want it to have a much larger set of skills - among them would be an ability to understand human language, to perform common-sense reasoning and to be able to generalize its abilities to new situations like humans do.

If we want to achieve this goal through Machine Learning, we need data to learn from. A lot of data if the task at hand is complex - which is the case for many useful tasks. One way to achieve wide applicability would be to provide training data for each specific task we would like the machine to perform. However it is unrealistic to obtain a sufficient amount of training data for some domains - it may for instance require expensive human annotation, or all domains of application may be difficult to predict in advance - while the amount of training data in other domains is practically unlimited (e.g. in language modelling or Cloze-style question answering).

The way to bridge this gap - and to achieve the aforementioned adaptability - is transfer learning (Pan & Yang, 2010) and the closely related semi-supervised learning (Zhu & Goldberg, 2009), which allow the system to acquire a set of skills on domains where data are abundant and then use these skills to succeed on previously unseen domains. Despite how important generalization is for general AI, a lot of research keeps focusing on solving narrow tasks.

In this paper we would like to examine transfer of learnt skills and knowledge within the domain of text comprehension, a field that has lately attracted a lot of attention within the NLP community (Hermann et al., 2015; Hill et al., 2015; Kobayashi et al., 2016; Kadlec et al., 2016b; Chen et al., 2016; Sordoni et al., 2016; Dhingra et al., 2016; Trischler et al., 2016; Weissenborn, 2016; Cui et al., 2016b;a).

*These authors contributed equally to this work.

In particular, we will examine:

1. Whether we could train models on natural-language tasks where data are abundant and
transfer the learnt skills to tasks where in-domain training data may be difficult to obtain. We will first look into what reasoning abilities a model learns from two large-scale reading comprehension datasets using artificial tasks, and then check whether it can transfer its skills to real-world tasks. Spoiler: both these transfers are very poor if we allow no training at all on the target task.
2. Whether pre-training on large-scale datasets does help if we allow the model to train on a small sample of examples from the target tasks. Here the results are much more positive.
3. Finally we examine whether the benefits of pre-training are concentrated in any particular part of the model - namely the word-embedding part or the context encoder (the reasoning part). It turns out that pre-training is useful for both components.

Although our results do not improve the current state of the art in any of the studied tasks, they show a clear positive effect of large-dataset pre-training on the performance of our baseline machine-learning model. Previous studies of transfer learning and semi-supervised learning in NLP focused on text classification (Dai & Le, 2015; Mou et al., 2016) and various parsing tasks (Collobert et al., 2011; Hashimoto et al., 2016). To our knowledge this work is the first study of transfer learning in reading comprehension, and we hope it will stimulate further work in this important area.

We will first briefly introduce the datasets we will be using on the pre-training and target sides, then our baseline model, and afterwards in turn describe the method and results of each of the three experiments.

2.1 PRE-TRAINING DATASETS

We have mentioned that for the model pre-training we would want to use a task where training data are abundant. An example of such a task is context-dependent cloze-style question answering, since the training data for this task can be generated automatically from a suitable corpus. We will use two such pre-training datasets in our experiments: the BookTest (Bajgar et al., 2016) and the CNN/Daily Mail (CNN/DM) news dataset (Hermann et al., 2015).

The task associated with both datasets is to answer a cloze-style question (i.e. fill in a blank in a sentence), the answer to which needs to be inferred from a context document provided with the question.

In the BookTest dataset, the context document is formed from 20 consecutive sentences from a book. The question is then formed by omitting a common noun or a named entity from the subsequent 21st sentence. Among datasets of this kind, the BookTest is among the largest, with more than 14 million training examples coming from 3555 copyright-free books available thanks to Project Gutenberg.

2.1.2 CNN/DAILY MAIL

In the CNN/DM dataset the context document is formed from a news article, while the cloze-style question is formed by removing a named entity from one of the short summary sentences which often appear at the top of the article.

To stop the model from using world knowledge from outside the context article (and hence truly test the comprehension of the article), all named entities were replaced by anonymous tags, which are further shuffled for each example. This may make the comprehension more difficult; however, since the answer is always one of the anonymized entities, it also reduces the number of possible answers, making guessing easier.

The first target dataset are the bAbI tasks (Weston et al., 2016) - a set of artificial tasks each of which is designed to test a specific kind of reasoning. This toy dataset will allow us to observe what particular skills the model may be learning from each of the three training datasets.

For our experiments we will be using an architecture designed to select one word from the context document as the answer. Hence we have selected Tasks 1, 2, 3, 4, 5, 11, 12, 13, 14 and 16, which fulfill this requirement, and added task 15, which required a slight modification. Furthermore, because both pre-training datasets are cloze-style, we also converted the bAbI task questions into cloze style (e.g. 'Where is John?' to 'John is in the XXXXX.').

For the models pre-trained on CNN/DM we also anonymized the tasks in a way similar to the pre-training dataset - i.e. we replaced all names of characters and also all words that can appear as answers for the given task by anonymous tags in the style of CNN/DM. This gives even models that have not seen any training examples from the target domain a chance to answer the questions.

Full details about these alterations can be found in Appendix A.

Secondly, we will look at transfer to the SQuAD dataset (Rajpurkar et al., 2016); here the associated task may be already useful in the real world. Although cloze-style questions have the huge advantage of the possibility of being automatically generated from a suitable corpus - the path taken by CNN/DM and the BookTest - in practice humans would use a proper question, not its cloze-style substitute. This brings us to the need for transfer from the data-rich cloze-style training to the domain of proper questions, where data are much scarcer due to the necessary human annotation.

2.2.2 SQUAD

The SQuAD dataset is a great target dataset to use for this. As opposed to the bAbI tasks, the goal of this dataset is actually a problem whose solving would be useful to humans - answering natural questions based on a natural-language encyclopedic knowledge base.

For our experiments we selected only a subset of the SQuAD training and development examples where the answer is a single word, since this is an inherent assumption of our machine-learning model. This way we extracted 28,346 training examples out of the original 100,000 examples and 3,233 development examples out of 10,570.
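The single-word filtering just described is a one-liner in practice. The sketch below is our own illustration; the assumed data format (a dict with an 'answer' string per example) and the whitespace tokenization are simplifications of whatever pre-processing was actually used.

```python
def single_word_subset(examples):
    """Keep only the examples whose answer span is a single token,
    matching the SQuAD filtering described above. Each `example` is
    assumed to be a dict with an 'answer' string field."""
    return [ex for ex in examples if len(ex["answer"].split()) == 1]
```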
A high-level structure of the AS Reader is shown in Figure 1. The words from the document and the question are first converted into vector embeddings using a look-up matrix. The document is then read by a bidirectional Gated Recurrent Unit (GRU) network (Cho et al., 2014). A concatenation of the hidden states of the forward and backward GRUs at each word is then used as a contextual embedding of this word, intuitively representing the context in which the word is appearing. We can also understand it as representing the set of questions to which this word may be an answer.

Similarly, the question is read by a bidirectional GRU, but in this case only the final hidden states are concatenated to form the question embedding.

The attention over each word in the context is then calculated as the dot product of its contextual embedding with the question embedding. This attention is then normalized by the softmax function and summed across all occurrences of each answer candidate. The candidate with the most accumulated attention is selected as the final answer.

For a more detailed description of the model, including equations, check Kadlec et al. (2016b).

[Figure 1: the AS Reader applied to the example document "Obama and Putin said Obama in Prague" and question "XXXXX visited Prague": word embeddings feed bidirectional-GRU document and question encoders, whose outputs combine into P(Obama | question, document).]

Figure 1: Structure of the AS Reader model.
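The answer-selection step described above is simple enough to sketch directly. The following is our own NumPy illustration of the attention-sum mechanism, assuming the contextual embeddings and the question embedding have already been computed (the GRU encoders are omitted, and the names are ours):

```python
import numpy as np

def attention_sum_answer(contextual_embeddings, question_embedding, doc_tokens):
    """AS Reader answer selection: dot-product attention over document
    positions, softmax-normalized, then summed over all occurrences of
    each candidate word; the candidate with the largest total wins.
    `contextual_embeddings` has one row per document token (the
    concatenated bidirectional-GRU states)."""
    scores = contextual_embeddings @ question_embedding
    attention = np.exp(scores - scores.max())   # numerically stable softmax
    attention /= attention.sum()
    totals = {}
    for token, weight in zip(doc_tokens, attention):
        totals[token] = totals.get(token, 0.0) + weight
    return max(totals, key=totals.get)
```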
4 EXPERIMENTS: TRANSFER LEARNING IN TEXT COMPREHENSION

Now let us turn in more detail to the three kinds of experiments that we performed.

In the first experiment we tested how a model trained on one of the large-scale pre-training datasets performs on the bAbI tasks without any opportunity to train on bAbI. Since the BookTest and CNN/DM tasks involve only cloze-style questions, we can't expect a model trained on them to answer natural ?-style questions. Hence we did not study the transfer to SQuAD in this case, only the transfer to the (cloze-converted) bAbI tasks.

First we tested how the AS Reader architecture (Kadlec et al., 2016b) can handle the tasks if trained directly on the bAbI training data for each task. Then we tested the degree of transfer from the BookTest and CNN/DM data to the 11 selected bAbI tasks.

In the first part of the experiment we trained a separate instance of the AS Reader on the 10,000-example version of the bAbI training data for each of the 11 tasks (for more details see Appendix B.1). On 8 of them the architecture was able to learn the task with accuracy at least 95%¹ (results for each task can be found in Table 4 in Appendix C). Hence, if given appropriate training, the AS Reader is capable of the reasoning needed to solve most of the selected bAbI tasks. Now that we know the AS Reader is powerful enough to learn the target tasks, we can turn to transfer from the two large-scale datasets.

The main part of this first experiment was then straightforward: we pre-trained multiple models on the BookTest and CNN/DM datasets and then simply evaluated them on the test datasets of the 11 selected bAbI tasks.

¹It should be noted that there are several machine-learning models that perform better than the AS Reader in the 10k weakly supervised setting, e.g. (Sukhbaatar et al., 2015; Xiong et al., 2016; Graves et al., 2016); however, they often need significant fine-tuning. On the other hand, we trained a plain AS Reader model without any modifications. Hyperparameter and feature fine-tuning could probably further increase its performance on individual tasks; however, it goes directly against the idea of generality that is at the heart of this work. For comparison with the state of the art we include results of DMN+ (Xiong et al., 2016) in Table 1, which had the best average performance over the original 20 tasks.

4.1.2 RESULTS

Table 1 summarizes the results of this experiment. Both the models trained on the BookTest and those trained on the CNN/DM dataset perform quite poorly on bAbI and achieve much lower accuracy than the models trained directly on each individual bAbI task. However, there is some transfer between the tasks, since the AS Reader trained on either the BookTest or CNN/DM outperforms a random baseline² and even an improved baseline which selects the most frequent word from the context that also appears as an answer in the training data for this task.

The results also show that the models trained on CNN/DM perform somewhat better on most tasks than the BookTest models. This may be due to the fact that bAbI tasks generally require the model to summarize information from the context document, which is also what the CNN/DM dataset is testing. On the other hand, the BookTest requires prediction of a possible continuation of a story, where the required kind of reasoning is much less clear but certainly different from pure summarization. Another explanation for the better performance of the CNN/DM models might be that they solve a slightly simpler task, since the candidate answers were already pre-selected in the entity anonymization step.

Conclusions from this experiment are that the skills learned from two large-scale datasets generalize surprisingly poorly to even simple toy tasks. This may make us ask whether most teams' focus on solving narrow tasks is truly beneficial if the skills learnt on these tasks are hard to apply elsewhere. However, it also brings us to our next experiment, where we try to provide some help to the struggling pre-trained models.

After showing that the skills learnt from the BookTest and CNN/DM datasets are by themselves insufficient for solving the toy tasks, the next natural question is whether they are useful if helped by training on a small sample of examples from the target task. We call this additional phase of training target adjustment.
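Schematically, the target-adjustment phase amounts to fine-tuning a pre-trained model on a handful of target examples and keeping the best-validation snapshot. The sketch below is our own rendering of that protocol; `train_one_epoch` and `evaluate` are hypothetical stand-ins for the actual training and evaluation routines.

```python
import random

def target_adjustment(pretrained_model, train_examples, k, valid_examples,
                      train_one_epoch, evaluate, n_epochs=10, seed=0):
    """Fine-tune a pre-trained model on k examples sampled from the
    target task and return the snapshot with the best validation
    accuracy (here the 'snapshot' is simply the returned model object)."""
    random.seed(seed)
    sample = random.sample(train_examples, k)
    best_valid, best_model, model = -1.0, pretrained_model, pretrained_model
    for _ in range(n_epochs):
        model = train_one_epoch(model, sample)
        valid_acc = evaluate(model, valid_examples)
        if valid_acc > best_valid:
            best_valid, best_model = valid_acc, model
    return best_model
```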
For this experiment we again use the bAbI tasks; however, we also test transfer to a subset of the SQuAD dataset, which is much closer to real-world natural-language question answering. The results presented in this and the following section are based on training 3701 model instances.

4.2.1 METHOD

Common to bAbI and SQuAD datasets. In this experiment we started with a pre-trained model which we used in the previous experiment. However, after it finished training on one of the large pre-training datasets, we allowed it to train on a subset of training examples from the target dataset. We tried subsets of various sizes, ranging from a single example to thousands. We tried training four different pre-trained models and also, for comparison, four randomly-initialized models with the same hyperparameters (see Appendix B.2 for details). The experiment with each task-model couple was run on 4 different data samples of each size, which were randomly drawn from the training dataset.

bAbI. For each of these models we observed the test accuracy at the best-validation epoch and compared this number between the randomly initialized and pre-trained models. Validation was done using 100 examples which were set aside from the task's original 10k training data.⁴ We perform the experiment with models pre-trained on the BookTest and also on CNN/DM.

SQuAD subset. In the SQuAD experiment, we trained the model on the subset of the original training dataset where answers were only single words, and on its sub-subsets. We report the best-validation accuracy on a development set filtered in the same way. This experiment was performed only with the models pre-trained on the BookTest.

The results of these experiments are summarized in Figures 2 and 3.

²The random baseline selects randomly uniformly between all unique words contained in the context document.

Table 1: The mean performance across 11 bAbI tasks. The first two columns show a random baseline² and a baseline that selects the most frequent word from the context which also appears as an answer in the training data for the task. The following three columns show the performance of the AS Reader trained on different datasets; the last column shows the results of DMN+ (Xiong et al., 2016), the state-of-the-art model on the bAbI 10k dataset. For more detailed results listing per-task accuracies see Appendix C.

Model | Rnd. | Most freq. cand. | AS Reader | AS Reader | AS Reader | DMN+
Train dataset | not trained | bAbI 10k | BookTest 14M | CNN/DM 1.2M | bAbI 10k | bAbI 10k
bAbI mean (11 tasks) | 6.1 | 29.9 | 34.8 | 38.1 | 92.7 | 95.7

[Figure 2: (a) mean best-validation test accuracy across the 11 bAbI tasks and (b) test accuracies on SQuAD, both plotted against the number of target training examples; the caption appears in the text below.]

[Figure 3: test accuracy against the number of training examples for bAbI Tasks 1, 4 and 5, for BookTest and CNN/DM pre-trained and randomly initialized models.]

Figure 3: Example of 3 bAbI tasks where pre-training seems to help.
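For reference, the two baselines from Table 1 are straightforward to make precise. The following is our own sketch of both; the data representation (lists of token strings and a set of training answers) is an assumption.

```python
import random
from collections import Counter

def random_baseline(context_tokens, rng=random):
    """Pick uniformly among the unique words of the context document."""
    return rng.choice(sorted(set(context_tokens)))

def most_frequent_candidate_baseline(context_tokens, training_answers):
    """The improved baseline: pick the context word that appears as an
    answer in the task's training data, preferring the most frequent
    such word in this context."""
    counts = Counter(w for w in context_tokens if w in training_answers)
    if not counts:
        return random_baseline(context_tokens)
    return counts.most_common(1)[0][0]
```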
³Note that the task may be easier for the CNN/DM models due to answer anonymization, which restricts the choice of possible answers.

We are planning to release the split training datasets soon.

Figure 2: Sub-figure (a) shows the average across the 11 bAbI tasks of the best-validation models' test accuracy. (b) shows the test accuracy on SQuAD of each model we trained (the points); the lines join the accuracies of the best-validation models for each training size.

⁴The other models trained on the full 10k dataset usually use 1000 validation examples (Sukhbaatar et al., 2015; Xiong et al., 2016); however, we wanted to focus on the low-data regime, thus we used 10 times fewer examples.

bAbI. Sub-figure 2a shows the mean test accuracy of the models that achieved the best validation result for each single task. The results for both the BookTest and CNN/DM experiments confirm the positive effect of pre-training compared to the randomly initialized baseline. Figure 3 shows performance on selected bAbI tasks where pre-training has a clearly positive effect; such a plot for each of the target tasks is provided in Appendix C.2 (Figure 4).

Note that the CNN/DM models cannot be directly compared to the BookTest results due to entity anonymization, which seems to simplify the task when the model is trained on smaller datasets.³

Since our evaluation methodology with different training set sizes is novel, we can compare our result only to MemN2N (Sukhbaatar et al., 2015) trained on a 1k dataset. MemN2N is the only weakly supervised model that reports accuracy when trained on less than 10k examples. MemN2N achieves average accuracy 93.2%⁵ on the eleven selected tasks. This is substantially better than both our random baseline (78.0%) and the BookTest-pre-trained model (79.5%); however, our model is not tuned in any way towards this particular task. One important conceptual difference is that the AS Reader processes the whole context as one sequence of words, whereas MemN2N receives the context split into single sentences, which simplifies the task for the network.

SQuAD subset. The results of the SQuAD experiment also confirm the positive effect of pre-training, see Sub-figure 2b; for now compare just the lines showing the performance of the fully pre-trained model and the randomly initialized model - the meaning of the remaining two lines shall become clear in the next section.

More detailed statistics about the results of this experiment can be found in Appendix D.

We should note that the performance of our model is not competitive with the state-of-the-art models on this dataset. For instance, the DCR model (Yu et al., 2016) trained on our SQuAD subset achieves validation accuracy 74.9% in this task, which is better than our randomly initialized (35.4%) and pre-trained (51.6%) models.⁶ However, the DCR model is designed specifically for the SQuAD task; for instance, it utilizes features that are not used by our model.

Since our previous experiment confirmed the positive effect of pre-training, we wondered which part of the model contains the knowledge transferable to new domains. To examine this we performed the following experiment.
4.3.1 METHOD

Our machine learning model, the AS Reader, consists of two main parts: the word-embedding look-up and the bidirectional GRUs used to encode the document and question (see Figure 1). Therefore a natural question was what the contribution of each of these parts is.

To test this we created two models out of each pre-trained model used in the previous experiment. The first model variant uses the pre-trained word embeddings from the original model while the GRU encoders are randomly initialized. We say that this model has pre-trained embeddings. The second model variant uses the opposite setting, where the word embeddings are randomly initialized while the encoders are taken from a pre-trained model. We call this pre-trained encoders.

bAbI. For this experiment we selected only a subset of tasks with a training set of 100 examples where there was a significant difference in accuracy between the randomly-initialized and pre-trained models. For evaluation we use the same methodology as in the previous experiment, that is, we report the accuracy of the best-validation model averaged over 4 training splits.

SQuAD subset. We evaluated both model variants on all training sets from the previous SQuAD experiment using the same methodology.

⁵MemN2N trained on each single task with PE LS RN features, see Sukhbaatar et al. (2015) for details.
⁶We would like to thank Yu et al. (2016) for training their system on our dataset.

Table 2: The effect of pre-training different components of the model for selected tasks. The first row shows the performance (average test accuracy across all trained model instances in each category) of a randomly initialized baseline model. The following three rows show the increase in accuracy (measured in percent absolute) when the model is initialized with weights pre-trained on the BookTest. The last line shows results for models initialized with Google News word2vec word embeddings (Mikolov et al., 2013).

Model variant | bAbI Task 1 (100 ex.) | Task 5 | Task 11 | Task 14 | SQuAD (28k ex.)
Random init. | 53% | 66% | 71% | 33% | 31%
Pre-trained encoders | +6 | +25 | +4 | +2 | +4
Pre-trained embeddings | +17 | +6 | +8 | +8 | +10
Pre-trained full | +34 | +22 | +14 | +13 | +17
Pre-trained word2vec | -2 | +5 | +1 | -1 | +5

bAbI. Table 2 shows the improvement of pre-trained models over a randomly initialized baseline. In most cases (all except Task 5) the fully pre-trained model achieved the best accuracy.

SQuAD subset. The accuracies of the four model variants are plotted in Figure 2b together with the results of the previous SQuAD experiment. The graph shows that both pre-trained embeddings and pre-trained encoders alone improve performance over the randomly initialized baseline; however, the fully pre-trained model is always the best.

The overall result of this experiment is that both pre-training of the word embeddings and pre-training of the encoder parameters are important, since the fully pre-trained model outperforms both partially pre-trained variants.

5 CONCLUSION

Our experiments show that transfer from two large cloze-style question-answering datasets to our two target tasks is surprisingly poor if the models aren't provided with any examples from the target domain. However, we show that pre-trained models perform significantly better than a randomly initialized model if they are shown at least a few training examples from the target domain. The usefulness of pre-trained word embeddings is well known in the NLP community; however, we show that the power of our pre-trained model does not lie just in the embeddings. This suggests that once the text-comprehension community agrees on a sufficiently versatile model, much larger parts of the model could start being reused than just the word embeddings.

The generalization of skills from a training domain to new tasks is an important ingredient of any system we would want to call intelligent. This work is an early step to explore this direction.

REFERENCES

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio.
Learning Phrase Representations using RNN Encoder-Decoder for\nTable 2: The effect of pre-training different components of the model for selected tasks. The first row shows performance (average test accuracy across all trained model instances in each category) of a randomly initialized baseline model. The following three rows show increase in accuracy (measured n percent absolute) when the model is initialized with weights pre-trained on the BookTest. The last line shows results for models initialized with Google News word2vec word embeddings (Mikolov et al.2013).\nYiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. Consensus Attention-base Neural Networks for Chinese Reading Comprehension. 2016b..\nBhuwan Dhingra, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. Gated-Attentior Readers for Text Comprehension. 2016. URLhttp://arxiv.org/abs/1606.01549\nKarl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa. Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neura Information Processing Systems, pp. 1684-1692, 2015\nFelix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015..\nRudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. From Particular to General : A Preliminary Case Study of Transfer Learning in Reading Comprehension. MAIN Workshop at NIPs, 2016a.\nRudolf Kadlec, Martin Schmid, Ondej Bajgar, and Jan Kleindienst. Neural Text Understanding with Attention Sum Reader. Proceedings of ACL, 2016b\nLili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. How Transferable are Neural Networks in NLP Applications? EMNLP, 2016.\nRonan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa Natural Language Processing ( Almost ) from Scratch. Journal ofMachine Learning Research 12 12:2461-2505, 2011.\nPranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,o00+ Questions for. Machine Comprehension of Text. (ii),2016. URL|http://arxiv.org/abs/1606.05250\nYelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. ReasoNet: Learning to Stop Readin. in Machine Comprehension. 2016. URLhttp://arxiv.0rg/abs/1609.05284\nAlessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative Alternating Neural Attention for Machine Reading. 2016.\nSainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-To-End Memory Network. pp.1-11,2015. URLhttp://arxiv.0rg/abs/1503.08895\nJason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart Van Merri, Armand Joulin and Tomas Mikolov. Towards AI-complete Question Answering: A Set of Prerequisite Toy Tasks 2016. URLhttps://arxiv.0rg/abs/1502.05698\nCaiming Xiong, Stephen Merity, and Richard Socher. Dynamic Memory Networks for Visual and Textual Question Answering. ICML, 2016. URLhttp://arxiv.org/abs/1603.01417"}, {"section_index": "10", "section_name": "A CLOZE STYLE BABI DATASET", "section_text": "Since our AS Reader architecture is designed to select a single word from the context document as. an answer (the task of CBT and BookTest), we selected 10 bAbI tasks that fulfill this requirement out of the original 20. These tasks are: 1. single supporting fact, 2. two supporting facts, 3. three. supporting facts, 4. two argument relations, 5. three argument relations, 11. basic coreference, 12 conjunction, 13. compound coreference, 14. time reasoning and 16. 
basic induction.

Task 15 needed a slight modification to satisfy this requirement: we converted the answers into plural (e.g. "Q: What is Gertrude afraid of? A: wolf." was converted into "A: wolves", which also seems to be the more natural way to formulate the answer to such a question).

Also, since CBT and BookTest train the model for Cloze-style question answering, we modify the original bAbI dataset by reformulating the questions into Cloze style. For example, we translate a question "Where is John ?" to "John is in the XXXXX .".

Sinno Jialin Pan and Qiang Yang. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, oct 2010. ISSN 1041-4347. doi: 10.1109/TKDE.2009.191. URL http://ieeexplore.ieee.org/1pdocs/epic03/wrapper.htm?arnumber=5288526.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016. ISSN 0028-0836. doi: 10.1038/nature16961. URL http://dx.doi.org/10.1038/nature16961.

Xiaojin Zhu and Andrew B Goldberg. Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1-130, 2009.

For the models pre-trained on CNN/DM we also replace two kinds of words by anonymized tags (e.g. "@entity56") in a style similar to the pre-training dataset. Specifically, we replace two (largely overlapping) categories of words (a sketch of this procedure follows the list):

1. Proper names of story characters (e.g. John, Sandra)
2. Any word that can appear as an answer for the particular task (e.g. kitchen, garden if the task is asking about locations).
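A minimal sketch of this anonymization is given below; it is our own illustration of the procedure described in this appendix, and the function name, the token-list representation and the tag format "@entityN" (only the example "@entity56" appears in the text) are assumptions.

```python
import random

def anonymize_example(tokens, words_to_hide, rng=random):
    """CNN/DM-style anonymization for the bAbI tasks: character names
    and all possible answer words are replaced by tags such as
    '@entity3', with the assignment shuffled independently for each
    example so that the tags carry no meaning across examples."""
    hidden = [w for w in sorted(set(words_to_hide)) if w in tokens]
    ids = list(range(len(hidden)))
    rng.shuffle(ids)
    mapping = {w: "@entity%d" % i for w, i in zip(hidden, ids)}
    return [mapping.get(w, w) for w in tokens], mapping
```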
B METHOD DETAILS

Here we give a more detailed description of the method we used to arrive at our results. We highlight only facts particular to this experiment. A more detailed general description of training the AS Reader is given in (Kadlec et al., 2016b).

The results given for the AS Reader trained on bAbI are each for a single model with 64 hidden units in each direction of the GRU context encoder and embedding dimension 32, trained on the 10k training data provided with that particular task.

The results for the AS Reader trained on the BookTest and the CNN/DM are for a greedy ensemble consisting of 4 models whose predictions were simply averaged. The models and ensemble were all validated on the validation set corresponding to the training dataset. The performance on the bAbI tasks oscillated notably during training; however, the ensemble averaging does somewhat mitigate this to get more representative numbers.

Table 3: Hyperparameters for both the randomly initialized and the pre-trained models.

Dataset | Hid. Units | Emb. | L. rate | Dropout
BookTest | 768 | 256 | 0.0001 | 0
BookTest | 384 | 384 | 0.0005 | 0.2
BookTest | 384 | 384 | 0.0005 | 0.4
BookTest | 512 | 384 | 0.0001 | 0
CNN/DM | 128 | 128 | 0.001 | 0
CNN/DM | 256 | 128 | 0.001 | 0
CNN/DM | 384 | 128 | 0.001 | 0
CNN/DM | 384 | 384 | 0.001 | 0

Table 4 shows detailed results for the experiments on models which were just pre-trained on one of the pre-training datasets without any target-adjustment. It also shows several baselines and results of a state-of-the-art model.

[Table 4: per-task bAbI accuracies for the baselines, the AS Reader pre-trained on the BookTest and CNN/DM, the AS Reader trained directly on bAbI, and the state-of-the-art models.]

Figure 4 shows the test accuracies of all models that we trained in the target-adjustment experiments, as well as lines joining the accuracies of the best-validation models.

[Figure 4: a grid of per-task plots (Tasks 1-5 and 11-16) of test accuracy against the number of training examples, for BookTest and CNN/DM pre-trained and randomly initialized models.]

Figure 4: The test accuracies of all models that we trained in the target-adjustment experiments. The line joins the test accuracies of the best-validation models of each model type.

Figure 5 plots the mean accuracy of all models trained in our experiments. This suggests that pre-training helped all models, not only the top performing ones selected by validation, as already shown in Figure 2a.

Figure 5: The average of the mean test accuracies across the 11 bAbI tasks. For the average of the best validation results see Figure 2a.

Table 5 shows the mean accuracy across all models trained for each combination of task, pre-training dataset and target-adjustment set size. Table 6 shows the corresponding standard deviations.

Table 7 then shows the p-value for the test of whether the expected accuracy of pre-trained models is greater than the expected accuracy of randomly initialized models. This shows that the pre-trained models are statistically significantly better for all target-adjustment set sizes on the SQuAD dataset. On bAbI the BookTest pre-trained models perform convincingly better, especially for target-adjustment dataset sizes 100, 500 and 1000, with Task 16 being the main exception to this because the AS Reader struggles to learn it in any setting. For the CNN+DM pre-training the results are not conclusive.
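The appendix does not state exactly which statistical test produced the Table 7 p-values; a standard choice for a one-sided comparison of two sets of accuracies with possibly unequal variances is Welch's t-test, sketched below as our own assumption about how such values could be computed.

```python
from scipy import stats

def one_sided_p_value(pretrained_accs, random_accs):
    """One-sided p-value for H1: the expected accuracy of pre-trained
    models is greater than that of randomly initialized ones, using
    Welch's t-test (unequal variances). This is an illustrative choice,
    not necessarily the test used for Table 7."""
    t, p_two_sided = stats.ttest_ind(pretrained_accs, random_accs,
                                     equal_var=False)
    return p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2
```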
For the CNN+DM pre-training, the results are not conclusive.

Table 5: Mean test accuracy for each combination of task, model type and target-adjustment set size. Columns give the accuracies for target-adjustment set sizes 0, 1, 10, 100, 500, 1000, 5000, 10000 and 28174.

Task     Pretrain. set  Model        0      1      10     100    500    1000   5000   10000  28174
SQuAD    BookTest       pre-trained  0.025  0.027  0.049  0.122  NA     0.245  NA     0.388  0.484
SQuAD    BookTest       rand. init.  0.004  0.006  0.018  0.042  NA     0.107  NA     0.214  0.315
Task 1   BookTest       pre-trained  0.356  0.383  0.459  0.870  0.992  0.995  0.999  NA     NA
Task 1   BookTest       rand. init.  0.010  0.327  0.431  0.529  0.888  0.916  0.976  NA     NA
Task 1   CNN+DM         pre-trained  0.295  0.385  0.519  0.689  0.969  0.985  0.990  NA     NA
Task 1   CNN+DM         rand. init.  0.100  0.354  0.450  0.582  0.954  0.941  0.977  NA     NA
Task 2   BookTest       pre-trained  0.206  0.295  0.318  0.339  0.398  0.410  0.755  0.783  NA
Task 2   BookTest       rand. init.  0.003  0.225  0.290  0.332  0.358  0.361  0.528  0.645  NA
Task 2   CNN+DM         pre-trained  0.177  0.265  0.288  0.359  0.410  0.398  0.539  0.586  NA
Task 2   CNN+DM         rand. init.  0.005  0.280  0.320  0.380  0.371  0.396  0.478  0.469  NA
Task 3   BookTest       pre-trained  0.159  0.192  0.227  0.314  0.440  0.508  0.759  0.857  NA
Task 3   BookTest       rand. init.  0.005  0.135  0.182  0.219  0.370  0.419  0.542  0.482  NA
Task 3   CNN+DM         pre-trained  0.164  0.213  0.222  0.303  0.450  0.489  0.585  0.687  NA
Task 3   CNN+DM         rand. init.  0.001  0.175  0.227  0.272  0.385  0.429  0.551  0.563  NA
Task 4   BookTest       pre-trained  0.452  0.490  0.545  0.631  0.986  0.989  1.000  NA     NA
Task 4   BookTest       rand. init.  0.032  0.532  0.556  0.582  0.846  0.982  0.993  NA     NA
Task 4   CNN+DM         pre-trained  0.323  0.413  0.596  0.766  0.946  0.986  0.992  NA     NA
Task 4   CNN+DM         rand. init.  0.234  0.536  0.554  0.593  0.926  0.990  0.986  NA     NA
Task 5   BookTest       pre-trained  0.601  0.604  0.632  0.877  0.983  0.982  0.991  NA     NA
Task 5   BookTest       rand. init.  0.013  0.162  0.295  0.635  0.964  0.973  0.989  NA     NA
Task 5   CNN+DM         pre-trained  0.448  0.492  0.581  0.842  0.969  0.984  0.989  NA     NA
Task 5   CNN+DM         rand. init.  0.185  0.252  0.350  0.844  0.982  0.984  0.988  NA     NA
Task 11  BookTest       pre-trained  0.334  0.415  0.620  0.847  0.986  0.988  0.998  NA     NA
Task 11  BookTest       rand. init.  0.008  0.540  0.692  0.711  0.922  0.951  0.974  NA     NA
Task 11  CNN+DM         pre-trained  0.119  0.492  0.671  0.762  0.820  0.972  0.977  NA     NA
Task 11  CNN+DM         rand. init.  0.207  0.679  0.737  0.734  0.853  0.934  0.980  NA     NA
Task 12  BookTest       pre-trained  0.307  0.429  0.694  0.786  0.988  0.991  0.999  NA     NA
Task 12  BookTest       rand. init.  0.006  0.499  0.705  0.721  0.917  0.966  0.962  NA     NA
Task 12  CNN+DM         pre-trained  0.236  0.518  0.650  0.779  0.866  0.968  0.970  NA     NA
Task 12  CNN+DM         rand. init.  0.009  0.661  0.765  0.735  0.855  0.921  0.965  NA     NA
Task 13  BookTest       pre-trained  0.330  0.505  0.793  0.944  0.959  0.976  0.998  NA     NA
Task 13  BookTest       rand. init.  0.004  0.617  0.920  0.937  0.950  0.966  0.992  NA     NA
Task 13  CNN+DM         pre-trained  0.114  0.612  0.830  0.942  0.949  0.946  0.975  NA     NA
Task 13  CNN+DM         rand. init.  0.094  0.828  0.941  0.944  0.951  0.961  0.971  NA     NA
Task 14  BookTest       pre-trained  0.270  0.266  0.273  0.465  0.775  0.807  0.896  0.912  NA
Task 14  BookTest       rand. init.  0.007  0.228  0.277  0.328  0.597  0.675  0.852  0.905  NA
Task 14  CNN+DM         pre-trained  0.280  0.314  0.351  0.458  0.677  0.790  0.840  0.904  NA
Task 14  CNN+DM         rand. init.  0.054  0.247  0.297  0.337  0.543  0.788  0.901  0.929  NA
Task 15  BookTest       pre-trained  0.085  0.417  0.436  0.491  0.544  0.546  0.689  0.853  NA
Task 15  BookTest       rand. init.  0.003  0.414  0.430  0.496  0.517  0.523  0.584  0.834  NA
Task 15  CNN+DM         pre-trained  0.563  0.604  0.591  0.608  0.611  0.635  0.644  0.597  NA
Task 15  CNN+DM         rand. init.  0.392  0.469  0.534  0.587  0.623  0.630  0.656  0.658  NA
Task 16  BookTest       pre-trained  0.036  0.456  0.451  0.465  0.469  0.474  0.528  0.566  NA
Task 16  BookTest       rand. init.  0.001  0.363  0.449  0.460  0.469  0.475  0.489  0.519  NA
Task 16  CNN+DM         pre-trained  0.444  0.467  0.468  0.474  0.480  0.505  0.519  0.547  NA
Task 16  CNN+DM         rand. init.  0.280  0.428  0.480  0.476  0.483  0.489  0.489  0.496  NA

Table 6: Standard deviation in accuracies for each combination of task, model type and target-adjustment set size.

Table 7: One-sided p-values for the hypothesis that the expected accuracy of the pre-trained models is greater than that of the randomly initialized models."}]
ByQPVFull | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Besides, many existing datasets (Everingham et al., 2010; Deng et al., 2009; Xiao et al., 2010) provide more than one type of annotation. For example, PASCAL VOC (Everingham et al., 2010) provides image-level tags, object bounding boxes, and image segmentation masks; the ImageNet dataset (Deng et al., 2009) provides image-level tags and a small portion of bounding boxes. Only using the image-level tags for training an image classification model would waste the other annotation resources. Therefore, in this work, we investigate whether these auxiliary annotations can also help a CNN model learn richer and more diverse feature representations.

In particular, we take advantage of this extra annotated information while training a CNN model to obtain a single CNN model with sufficient inherent diversity, with the expectation that such a model learns more diverse feature representations and offers stronger generalization ability for image classification than vanilla CNNs. We therefore propose a group orthogonal convolutional neural network (GoCNN) model that is able to exploit this extra annotated information as privileged information. The idea is to learn different groups of convolutional functions which are "orthogonal" to the ones in other groups. Here by "orthogonal" we mean there is no significant correlation among the produced features. By "privileged information" we mean auxiliary information that is only used during the training phase. Optimizing orthogonality among convolutional functions reduces redundancy and increases diversity within the architecture."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Deep convolutional neural networks (CNNs) have brought a series of breakthroughs in image classification tasks (He et al., 2015; Girshick, 2015; Zheng et al., 2015). Many recent works (Simonyan & Zisserman, 2014; He et al., 2015; Krizhevsky et al., 2012) have observed that CNNs with different architectures or even different weight initializations may learn slightly different feature representations. Combining these heterogeneous models can provide richer and more diverse feature representations, which can further boost the final performance. Such observations motivate us to directly pursue feature diversity within a single model in this work.

Properly defining the groups of convolutional functions in the GoCNN is not an easy task. In this work, we propose to exploit available privileged information for identifying the proper groups. Specifically, in the context of image classification, object segmentation annotations, which are (partially) available in several public datasets, give richer information.

In addition, the background contents are usually independent of the foreground objects within an image. Thus, splitting convolutional functions into different groups and enforcing them to learn features from the foreground and background separately can help construct orthogonal groups with small correlations. Motivated by this, we introduce the GoCNN architecture, which learns discriminative features from foreground and background separately, where the foreground-background segregation is offered by the privileged segmentation annotation during training. In this way the inherent diversity of the GoCNN can be explicitly enhanced.
Moreover, benefiting from pursuing the group orthogonality, the learned convolutional functions within GoCNN are demonstrated to be foreground and background diagnostic even when extracting features from new images in the testing phase.

To the best of our knowledge, this work is the first to explore a principled way to train a deep neural network with the desired inherent diversity, and the first to investigate how to use segmentation privileged information to assist image classification within a deep learning architecture. Experiments on ImageNet and PASCAL VOC clearly demonstrate that GoCNN improves upon vanilla CNN models significantly in terms of classification accuracy.

As a by-product of implementing GoCNN, we also provide positive answers to the following two prominent questions about image classification: (1) Does background information indeed help object recognition in deep learning? (2) Can a more precise annotation with richer information, e.g., segmentation annotation, assist the image classification training process non-trivially?

Learning rich and diverse feature representations is always desired when training CNNs for gaining stronger generalization ability. However, most existing works mainly focus on introducing hand-crafted cost functions to implicitly pursue diversity (Tang, 2013), on modifying activation functions to increase model non-linearity (Jin et al., 2015), or on constructing more complex CNN architectures (Simonyan & Zisserman, 2014; He et al., 2015; Krizhevsky et al., 2012). Methods that explicitly encourage the inherent diversity of CNN models are still rare.

Knowledge distillation (Hinton et al., 2015) can be seen as an effective way to learn more discriminative and diverse feature representations. The distillation process compresses knowledge and thus encourages a weak model to learn more diverse and discriminative features. However, knowledge distillation works in two stages which are isolated from each other and has to rely on pre-training a complicated teacher network model. This may introduce undesired computation overhead. In contrast, our proposed approach can learn a diverse network in a single stage without requiring an extra network model. Similar works, e.g. the Diversity Networks (Sra & Hosseini), also squeeze the knowledge by preserving the most diverse features to avoid harming the performance.

Using privileged information to learn better features during the training process is similar in spirit to our method. Both our proposed method and Lapin et al. (2014) introduce privileged information to assist the training process. However, almost all existing works (Lapin et al., 2014; Lopez-Paz et al., 2016; Sharmanska et al., 2014) are based on SVM+, which only focuses on training a better classifier and cannot perform end-to-end training for better features.

Several works (Andrew et al., 2013; Srivastava & Salakhutdinov, 2012) on canonical correlation analysis (CCA) for CNNs provide a way to constrain feature diversity. However, the goal of CCA is to maximize the correlation between representations, which is the opposite of the feature diversity pursued here.

More recently, Cogswell et al. (2016) proposed the DeCov approach to reduce the over-fitting risk of a deep neural network model by reducing feature covariance. DeCov also agrees with increasing the generalization ability of a model by pursuing feature diversity. This is consistent with our motivation.
However, DeCov penalizes the covariance in an unsupervised fashion and cannot utilize extra available annotations, leading to insignificant performance improvement over vanilla models (Cogswell et al., 2016)."}, {"section_index": "2", "section_name": "MODEL DIVERSITY OF CONVOLUTIONAL NEURAL NETWORKS", "section_text": "Throughout the paper, we use f_i^{(k)}(·) to denote the i-th convolutional function (or filter) at the k-th layer of a multi-layer network, and c^{(k)} to denote the total number of convolutional functions at the k-th layer. Each function f_i^{(k)} maps an input feature map to a new feature map. The height and the width of a feature map output at layer k are denoted as h^{(k)} and w^{(k)} respectively. We consider a network model consisting of N layers in total.

Under a standard CNN architecture, the elements within the same feature map are produced by the same convolutional function f_i^{(k)} and thus they represent the same type of feature across different locations. Therefore, encouraging feature variance or diversity within a single feature map does not make sense. In this work, our target is to enhance the diversity among different convolutional functions. Here we first give a formal description of model diversity for an N-layer CNN.

Definition 1 (Model Diversity). Let f_i^{(k)} denote the i-th convolutional function at the k-th layer of a neural network model; the model diversity of the k-th layer is then defined as

\[ g^{(k)} = 1 - \frac{1}{(c^{(k)})^2} \sum_{i,j=1}^{c^{(k)}} \mathrm{cor}\big(f_i^{(k)}, f_j^{(k)}\big). \tag{1} \]

Here the operator cor(·, ·) denotes the statistical correlation.

In other words, the inherent diversity of a network model that we are going to maximize is evaluated across all the convolutional functions within the same layer.

The most straightforward way to maximize the above diversity for each layer is to directly maximize the quantity g^{(k)} during training. However, it is quite involved to optimize the hard diversity in (1) due to the large number of combinations of different convolutional functions. Thus, we propose to solve this problem by learning the convolutional functions in different groups separately. Different functions from different groups are uncorrelated with each other, so we do not need to consider their correlation. Suppose the convolutional functions at each layer are partitioned into m different groups, denoted as G = {G_1, ..., G_m}. Then, we instead maximize the following group-wise model diversity:

\[ g_G^{(k)} = 1 - \frac{1}{(c^{(k)})^2} \sum_{\substack{s,t=1,\, s \neq t}}^{|G|} \; \sum_{i \in G_s,\, j \in G_t} \mathrm{cor}\big(f_i^{(k)}, f_j^{(k)}\big). \tag{2} \]

It is also worth noting that simply adding a segmentation loss to an image classification neural network is not equivalent to a GoCNN model. This is because image segmentation requires each pixel within the target area to be activated and the others to stay silent for dense prediction, while GoCNN does not require each pixel within the target area to be activated. GoCNN is specifically designed for classification tasks, not for segmentation ones. Moreover, our proposed GoCNN supports learning from partial privileged information, while the CNN above needs a fully annotated training set.

Instead of directly optimizing the model diversity, we consider optimizing the group-wise model diversity by finding a set of orthogonal groups {G_1*, ..., G_m*}, where convolutional functions within each group are uncorrelated with those in different groups. In the scenario of image representation learning, one typical example of such orthogonal groups is the foreground group and background group pair: partitioning the functions into two groups and letting them learn features from foreground and background contents respectively.
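To make Definition 1 and its group-wise variant concrete, the following sketch estimates both quantities from sampled filter responses. The function names, and the exact normalization constant, are our reading of the partially garbled formulas above, not a definitive implementation.

```python
# Sketch of the layer diversity g^(k) and its group-wise variant. F[i] holds
# the responses of convolutional function i on a set of inputs; groups is a
# list of index arrays, e.g. [foreground_idx, background_idx].
import numpy as np

def model_diversity(F):
    """g^(k): one minus the average pairwise correlation of filter responses."""
    c = np.corrcoef(F)                 # (num_filters, num_filters)
    return 1.0 - c.mean()

def group_diversity(F, groups):
    """Group-wise diversity: only correlations across different groups count."""
    c = np.corrcoef(F)
    total, count = 0.0, 0
    for s, gs in enumerate(groups):
        for t, gt in enumerate(groups):
            if s != t:
                total += c[np.ix_(gs, gt)].sum()
                count += len(gs) * len(gt)
    return 1.0 - total / max(count, 1)

F = np.random.randn(8, 1000)           # 8 filters, 1000 responses each
print(model_diversity(F))
print(group_diversity(F, [np.arange(6), np.arange(6, 8)]))
```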
Figure 1: Architectures of the proposed GoCNN used in the training (top) and testing (bottom) phase. The two groups are colored blue (foreground) and purple (background) respectively. FC represents the fully connected layer. Panel (a) shows the architecture of GoCNN in the training phase; panel (b) shows the testing phase.

In this work, we use segmentation annotation as privileged information for finding orthogonal groups of convolutional functions G_1*, ..., G_m*. In particular, we derive the foreground and background segregation from the privileged information for an image. Then we partition the convolutional functions at a specific layer of a CNN model into foreground and background groups respectively, and train a GoCNN model to learn the foreground and background features separately. Details about the architecture of GoCNN and the training procedure are given in the following section."}, {"section_index": "3", "section_name": "GROUP ORTHOGONAL CONVOLUTIONAL NEURAL NETWORKS", "section_text": "We introduce the group orthogonal constraint to explicitly maximize the group-wise diversity among different groups of convolutional functions by constructing a group orthogonal convolutional neural network (GoCNN). Details on the architecture of GoCNN are shown in Figure 1. GoCNN is built upon a standard CNN architecture. The convolutional functions at the final convolution layer are explicitly divided into two groups: the foreground group, which concentrates on learning foreground features, and the background group, which learns background features. The output features of these two groups are then aggregated by a fully connected layer.

In the following subsections, we give more details of the construction of the foreground and background groups. After that, we describe how to combine these two components into a unified network architecture, the GoCNN."}, {"section_index": "4", "section_name": "4.1 FOREGROUND AND BACKGROUND GROUPS", "section_text": "To learn convolutional functions that are specific to the foreground content of an image, we propose the following two constraints for the foreground group of functions. The first constraint forces the functions to be learned from the foreground only, free of any contamination from the background, and the second constraint encourages the learned functions to be discriminative for image classification.

We learn features that only lie in the foreground by suppressing any contamination from the background. As aforementioned, here we use the object segmentation annotations (denoted as Mask) as the privileged information in the training phase to help identify the background regions that the foreground convolutional functions should not respond to. The background contamination is extracted by an extractor applied to each feature map within the foreground group.
In particular, we define an extractor φ(·, ·) as follows:

\[ \phi\big(f_j^{(k)}(x), \mathrm{Mask}_b\big) = f_j^{(k)}(x) \circ \mathrm{Mask}_b, \]

where ∘ denotes element-wise multiplication. In the above operator, we use the background object mask Mask_b to extract background features. Each element in Mask_b is equal to one if the corresponding position lies on a background object and zero otherwise. Here, we assume the masks have already been re-sized by interpolation to have dimensionality compatible with the output feature map f_j^{(k)}(x), so that the element-wise multiplication is valid. The extracted background features are then suppressed by a regression loss defined as follows:

\[ \min_{\theta} \big\| \phi\big(f_j^{(k)}(x;\theta), \mathrm{Mask}_b\big) \big\|_F^2, \tag{3} \]

where θ denotes the parameters of the network.

For the second constraint, i.e., encouraging the functions to learn discriminative features, we simply use the standard softmax classification loss to supervise the learning phase.

The role of the background group is complementary to the foreground one. It aims to learn convolutional functions that are only specific to background contents. Thus, the functions within the background group have the same suppression term as in Eqn. (3), in which Mask_b is replaced with Mask_f to restrict the learned features to lie only in the background space. Mask_f is simply computed as Mask_f = 1 - Mask_b. Also, a softmax linear classifier is attached during training to guarantee that these learned background functions are useful for predicting image categories."}, {"section_index": "5", "section_name": "4.2 ARCHITECTURE AND IMPLEMENTATION DETAILS OF THE GOCNN", "section_text": "In GoCNN, the size ratio of the foreground group to the background group is fixed to 3:1 during training, since intuitively the foreground contents are much more informative than the background contents in classifying images. A single fully connected layer (or multiple layers, depending on the basic CNN architecture) is used to unify the function learning within the different groups and to combine the features learned from them. It aggregates the information from the different feature spaces and produces the final image category prediction. More details are given in Figure 1.

Because we are dealing with a classification problem, a main classifier with a standard classification loss function is adopted at the top layer of GoCNN. In our experiments, the standard softmax loss is used for single-label image classification and the logistic regression loss is used for multi-label image classification, e.g., images from the PASCAL VOC dataset (Everingham et al., 2010).

During the testing stage, the parts unrelated to the final main output are removed, as shown in Figure 1(b). Therefore, in terms of testing, neither extra parameters nor extra computational cost is introduced: GoCNN is exactly the same as the adopted CNN in the testing phase.

In summary, an incoming training sample passes through all the layers up to the final convolution layer. Then the irrelevant features for each group (foreground or background) are filtered out by the privileged segmentation masks, and the filtered features flow into a suppressor (see Eqn. (3)). The output features of each group flow along two paths: one leads to the group-wise classifier, and the other to the main classifier. The three gradients from the suppressors, the group-wise classifiers and the main classifier are used for updating the network parameters.

Applications with Incomplete Privileged Information. Our proposed GoCNN can also be applied for semi-supervised learning. When only a small subset of the images in a dataset have privileged segmentation annotations, we simply set the segmentations of images without annotations to Mask_f = Mask_b = 1, where 1 is the matrix with all of its elements equal to 1. In other words, we disable both suppression terms (ref. Eqn. (3)) on the foreground and background parts, as well as the extractors on the back-propagation path. By doing so, fully annotated training samples with privileged information supervise GoCNN to learn both discriminative and diverse features, while the samples with only image tags guide GoCNN to learn category-discriminative features only.
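A minimal PyTorch sketch of the extractor and the suppression loss of Eqn. (3) follows. The API, the shapes and the mask convention are ours, not the paper's: here the mask is 1 exactly on the region a group must stay silent on, so an all-zero mask switches the term off, which is how unannotated images in the semi-supervised setting are handled in this sketch.

```python
# Sketch of the extractor phi and the suppression loss of Eqn. (3).
import torch
import torch.nn.functional as F

def suppression_loss(feat, mask):
    # feat: (N, C, H, W) feature maps of one group; mask: (N, 1, h, w) in {0, 1},
    # equal to Mask_b for the foreground group and Mask_f for the background group.
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    contamination = feat * mask                    # extractor phi(feat, mask)
    # Squared Frobenius norm per sample, averaged over the mini-batch.
    return contamination.pow(2).flatten(1).sum(1).mean()

feat = torch.randn(2, 48, 7, 7, requires_grad=True)
mask = torch.zeros(2, 1, 224, 224)
mask[0, :, :, :112] = 1.0   # annotated image: suppress responses on the left half
print(suppression_loss(feat, mask))  # sample 1 (all-zero mask) contributes nothing
```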
ImageNet. ImageNet contains 1,000 fine-grained classes with about 1,300 images for each class and 1.2 million images in total, but without any image segmentation annotations. To collect privileged information, we randomly select 130 images from each class and manually annotate the object segmentation masks for them. Since our focus is on justifying the effectiveness of our proposed method rather than pushing the state of the art, we only collect privileged information for 10% of the data (overall 130k training images) to show the performance improvement brought by our model. We call the new dataset consisting of these segmented images ImageNet-0.1m. For evaluation, we use the original validation set of ImageNet, which contains 50,000 images. Note that neither our baselines nor the proposed GoCNN needs segmentation information in the testing phase.

PASCAL VOC 2012. The PASCAL VOC 2012 dataset contains 11,530 images from 20 classes. For the classification task, there are 5,717 images for training and 5,823 images for validation. We use this dataset to further evaluate the generalization ability of the different models, including GoCNN, trained on ImageNet-0.1m: we pre-train the evaluated models on the ImageNet-0.1m dataset and fine-tune them using the logistic regression loss on the PASCAL VOC 2012 training set. We evaluate their performance on the validation set.

The Basic Architecture of GoCNN. In our experiments, we use the ResNet (He et al., 2015) as the basic architecture to build GoCNN. Since the deepest ResNet contains 152 layers, which would cost several weeks to train, we choose a light version of the architecture (ResNet-18 (He et al., 2015)) with 18 layers as our basic model for most cases. We also use ResNet-152 (He et al., 2015) for experiments on the full ImageNet dataset. The final convolution layer gives a 7 x 7 output, which is pooled into a 1 x 1 feature map by average pooling. Then a fully connected layer is added to perform linear classification. The loss function used for single-label classification on the ImageNet dataset is the standard softmax loss. When performing multi-label classification on PASCAL VOC, we use the logistic regression loss.

Training and Testing Strategy. We use MXNet (Chen et al., 2015) to conduct model training and testing. The GoCNN weights are initialized as in He et al. (2015) and we train GoCNN from scratch. Images are resized with the shorter side randomly sampled within [256, 480] for scale augmentation, and 224 x 224 crops are randomly sampled during training (He et al., 2015). We use SGD with a base learning rate of 0.1 and reduce the learning rate by a factor of 10 when the validation accuracy saturates. For the experiments on ResNet-18 we use a single node with a mini-batch size of 512. For ResNet-152 we use 48 GPUs with a mini-batch size of 32 per GPU. Following He et al. (2015), we use a weight decay of 0.0001 and a momentum of 0.9 in training.
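Putting the architectural description above together, the GoCNN head (the 3:1 channel split at the final convolution layer, per-group average pooling and classifiers, and a main classifier on the concatenation) can be sketched as follows. This is an illustrative module under our own naming, not the authors' released implementation.

```python
# Sketch of the GoCNN head on top of a backbone's final convolution output.
import torch
import torch.nn as nn

class GoCNNHead(nn.Module):
    def __init__(self, channels=512, num_classes=1000):
        super().__init__()
        self.fg_ch = channels * 3 // 4                 # 3:1 foreground:background
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fg_cls = nn.Linear(self.fg_ch, num_classes)
        self.bg_cls = nn.Linear(channels - self.fg_ch, num_classes)
        self.main_cls = nn.Linear(channels, num_classes)

    def forward(self, feat):                           # feat: (N, C, H, W)
        fg, bg = feat[:, :self.fg_ch], feat[:, self.fg_ch:]
        fg_vec = self.pool(fg).flatten(1)
        bg_vec = self.pool(bg).flatten(1)
        main = self.main_cls(torch.cat([fg_vec, bg_vec], dim=1))
        # fg/bg maps are returned so the suppression losses can be applied.
        return main, self.fg_cls(fg_vec), self.bg_cls(bg_vec), (fg, bg)

head = GoCNNHead()
main, fg_logits, bg_logits, (fg, bg) = head(torch.randn(2, 512, 7, 7))
print(main.shape)  # torch.Size([2, 1000])
```

At test time only the main path is used, so the head costs exactly as much as a vanilla classifier, consistent with the claim that no extra parameters or computation are introduced at inference.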
Compared Baseline Models. Our proposed GoCNN follows the Learning Using Privileged Information (LUPI) paradigm (Lapin et al., 2014), which exploits additional information to facilitate learning but does not require extra information in testing. There are a few baseline models falling into the same paradigm that we can compare with. One is the SVM+ method (Pechyony & Vapnik, 2011) and the other is the standard model (i.e., the ResNet-18); we simply refer to ResNet-18 as the baseline if no confusion occurs. In the experiments, we implement SVM+ using the code provided by Pechyony & Vapnik (2011) with default parameter settings and a linear kernel. We follow the scheme described in Lapin et al. (2014) to train the SVM+ model. More concretely, we train multiple one-versus-rest SVM+ models upon the deep features extracted from both the entire images and the foreground regions (used as the privileged information). We use average pooling over 10 crops on the feature maps before the FC layer as the deep feature for training SVM+. It is worth noting that all of these models (including SVM+ and GoCNN) use a linear classifier and thus have the same number of parameters; more concretely, GoCNN does not require more parameters than SVM+ or the vanilla ResNet.

We evaluate the performance of GoCNN in two different testing settings: the complete privileged information setting and the partial privileged information setting. We perform 10-crop testing (Krizhevsky et al., 2012) for the complete privileged information scenario, and single-crop testing for the partial privileged information scenario for convenience.

Table 1: Validation accuracy (for 10-crop validation) of different models on the ImageNet validation set. All the compared models are trained on the ImageNet-0.1m dataset with complete privileged information.

              Top-1 Accuracy (%)                              Top-5 Accuracy (%)
Model         Main_classifier  Fg_classifier  Bg_classifier   Main_classifier  Fg_classifier  Bg_classifier
SVM+          37.53            -              -               -                -              -
Baseline      46.00            -              -               70.05            -              -
Full GoCNN    50.39            49.60          40.03           75.00            74.21          66.98"}, {"section_index": "6", "section_name": "5.2 TRAINING MODELS WITH COMPLETE PRIVILEGED INFORMATION", "section_text": "In this subsection, we consider the scenario where every training sample has complete privileged segmentation information. First, we evaluate the performance of our proposed GoCNN on the ImageNet-0.1m dataset. Table 1 summarizes the accuracy of the different models. As can be seen from the results, given the complete privileged information, our proposed GoCNN presents much better performance than the compared models. The group orthogonal constraints successfully regularize the learned features to lie within the foreground and background spaces, and the trained GoCNN thus presents a stronger generalization ability. It is also interesting (although not surprising) to observe that when foreground features are combined with background features, the performance of GoCNN can be further improved from 49.60% to 50.39% in terms of top-1 accuracy. One can observe that the background information indeed benefits object recognition to some extent. To further investigate the contribution of each component within GoCNN to the final performance, we conduct another experiment and show the results in Table 2. In these experiments, we purposely prevent gradient propagation from all components except the one being investigated during training, and add another setting on the baseline method where the background is removed and only the foreground object is kept in each training sample, denoted as Baseline-obj.
Comparing the results of Full GoCNN between the different classifiers, we can see that learning background features can actually improve the overall performance. And when we compare the Fg_classifier between Baseline-obj, Only Fg and Full GoCNN, we can see the importance of the background information in training more robust and richer foreground features.

Table 2: Validation accuracy (for 10-crop validation) of different components of GoCNN on the ImageNet validation set. Baseline-obj refers to the baseline model trained on the pure-object ImageNet-0.1m dataset, Only Bg refers to our proposed model with the foreground-part gradient blocked, and Only Fg refers to our proposed model with the background-part gradient blocked. (* marks the part which shares the same classifier with the main classifier.)

              Top-1 Accuracy (%)                              Top-5 Accuracy (%)
Model         Main_classifier  Fg_classifier  Bg_classifier   Main_classifier  Fg_classifier  Bg_classifier
Baseline-obj  12.45            12.45*         -               24.43            24.43*         -
Only Bg       40.36            -              40.36*          67.24            -              67.24*
Only Fg       49.15            49.15*         -               73.70            73.70*         -
Full GoCNN    50.39            49.60          40.03           75.00            74.21          66.98

Secondly, to verify the effectiveness of learning features in two different groups with our proposed method, we visualize the maximum activation value within each group of feature maps for several testing images. The feature maps are generated by the final convolution layer with 384 x 384 resolution input testing images, so the final convolution layer gives 12 x 12 output maps. We aggregate the feature maps within the same group into one feature map by a max operation. As can be seen from Figure 2, foreground and background features are well separated and the result looks just like the semantic segmentation mask. Compared with the baseline model, more neurons are activated in our proposed method in the two orthogonal spaces. This indicates that more diverse and discriminative features are learned in our framework compared with the baseline method.

(a) Input (b) GoCNN-Fg (c) GoCNN-Bg (d) GoCNN-Full (e) Baseline

Figure 2: Activation maps of foreground feature maps (GoCNN-Fg), background feature maps (GoCNN-Bg) and all feature maps (GoCNN-Full) produced by our proposed GoCNN on the ImageNet validation set. The bottom row shows the activation maps produced by the baseline model.

Finally, we further evaluate the generalization ability of our proposed method on the PASCAL VOC dataset. It is well known that an object shares many common properties with others even if they are not from the same category. A well-performing CNN model should be able to learn robust features rather than just fit the training images. In this experiment, we fine-tune the different models on the PASCAL VOC images to test whether the learned features are able to generalize well to another dataset. Note that we add another convolution layer with a 1 x 1 kernel size and 512 outputs as an adaptive layer on all models. It is not necessary to add such a layer in networks without a residual structure (He et al., 2015). As can be seen from Table 3, our proposed network shows better results and higher average precision across all categories, which means our proposed GoCNN learns more representative and richer features that are easier to transfer from one domain to another.

Table 3: Classification results on PASCAL VOC 2012 (train/val). The performance is measured by Average Precision (AP, in %).

Model     aero  bike  bird  boat  bottle  bus   car   cat   chair  cow   table  dog   horse  mbk   prsn  plant  sheep  sofa  train  tv    mean
Baseline  95.2  79.3  90.2  82.8  52.6    90.9  78.5  90.2  62.3   64.9  64.5   84.2  81.1   82.0  91.4  50.0   78.0   61.1  92.7   77.5  77.5
GoCNN     96.1  81.0  90.8  85.3  56.0    92.8  78.9  91.5  63.6   69.7  65.1   84.8  84.0   83.9  92.3  52.0   83.9   64.2  93.8   78.6  79.4"}, {"section_index": "7", "section_name": "5.3 TRAINING GOCNN WITH PARTIAL PRIVILEGED INFORMATION", "section_text": "In this subsection, we investigate the performance of different models when using only partial privileged information. The experiment is also conducted on the ImageNet-0.1m dataset. We evaluate the performance of our proposed GoCNN by varying the percentage of privileged information (i.e., the percentage of training images with segmentation annotations) from 20% to 100%.

The validation accuracies of GoCNN and the baseline model (i.e., the ResNet-18) are shown in Table 4. From the results, one can observe that with an increasing percentage of privileged information, the accuracy continuously increases until the percentage of privileged information reaches 80%. The performance gain from increasing the percentage from 40% to 100% is only 0.71%, compared with 0.92% from 20% to 40%. This is probably because the suppression losses are more effective than we expected; that is, even a small amount of guidance from the suppression losses appears to be sufficient.

Table 4: Validation accuracy (Top-1, in %, 1-crop validation) with 20%, 40%, 60%, 80% and 100% privileged information. Since the baseline method (ResNet-18) does not use privileged information, its validation accuracy remains the same across the different tests.

Model                 20%    40%    60%    80%    100%
Baseline (ResNet-18)  44.26  44.26  44.26  44.26  44.26
GoCNN-18              47.00  47.92  48.18  48.61  48.63

To verify the effectiveness of GoCNN on a very large training dataset with more complex CNN architectures, we conducted another experiment on the complete ImageNet-1k dataset with only 10% privileged information, using the 152-layer ResNet as our basic model. As can be seen from Table 5, our proposed GoCNN achieves 21.8% top-1 error while the vanilla ResNet-152 has 23.0% top-1 error. Such a performance boost is consistent with the results shown in Table 4, which again confirms the effectiveness of GoCNN.

Table 5: Validation error rate (in %, 1-crop validation) with 10% privileged information on the full ImageNet-1k dataset.

Model                       Top-1 err.  Top-5 err.
ResNet-101 (He et al., 2015)  23.6        7.1
ResNet-152 (He et al., 2015)  23.0        6.7
GoCNN-152                     21.8        6.1

Based on our experimental results, we can also provide answers to the following two important questions.

Does background information indeed help object recognition for deep learning methods? Based on our experiments, we give a positive answer. Intuitively, background information may provide some "hints" for object recognition. However, though several works (Song et al., 2011; Russakovsky et al., 2012) have proven the usefulness of background information when using handcrafted features, few works have studied the effectiveness of background information in deep learning methods for object recognition tasks. Based on the experimental results shown in Table 2, both the foreground classification accuracy and the overall classification accuracy can be further boosted with our proposed framework. This means that the background deep features can also provide useful information for foreground object recognition.

Can a more precise annotation with richer information, e.g., segmentation annotation, assist the image classification training process? The answer is clearly yes. In fact, in recent years, several works have explored how object detection and segmentation can benefit each other (Dai et al., 2015; Hariharan et al., 2014). However, none of the existing works has studied how image segmentation information can help train a better classification deep neural network. In this work, by treating the segmentation annotations as privileged information, we demonstrate a possible way to utilize segmentation annotations to assist image classification training."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "We proposed a group orthogonal neural network for image classification which encourages learning more diverse feature representations. Privileged information is utilized to train the proposed GoCNN model. To the best of our knowledge, we are the first to explore how to use image segmentation as privileged information to assist CNN training for image classification.

Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In Proceedings of the 30th International Conference on Machine Learning, pp. 1247-1255, 2013.

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015.

Michael Cogswell, Faruk Ahmed, Ross Girshick, Larry Zitnick, and Dhruv Batra. Reducing overfitting in deep networks by decorrelating representations. ICLR, 2016.

Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 88(2):303-338, 2010.

Bharath Hariharan, Pablo Arbelaez, Ross Girshick, and Jitendra Malik. Simultaneous detection and segmentation. In Computer Vision - ECCV 2014, pp. 297-312. Springer, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Xiaojie Jin, Chunyan Xu, Jiashi Feng, Yunchao Wei, Junjun Xiong, and Shuicheng Yan. Deep learning with S-shaped rectified linear activation units. arXiv preprint arXiv:1512.07030, 2015.

Maksim Lapin, Matthias Hein, and Bernt Schiele. Learning using privileged information: SVM+ and weighted SVM. Neural Networks, 53:95-108, 2014.

Dmitry Pechyony and Vladimir Vapnik. Fast optimization algorithms for solving SVM+. Stat. Learning and Data Science, 2011.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Zheng Song, Qiang Chen, Zhongyang Huang, Yang Hua, and Shuicheng Yan. Contextualizing object detection and classification. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1585-1592. IEEE, 2011.

Suvrit Sra and Reshad Hosseini. Geometric optimization in machine learning.

Nitish Srivastava and Ruslan R. Salakhutdinov. Multimodal learning with deep Boltzmann machines. In Advances in Neural Information Processing Systems, pp. 2222-2230, 2012."}]
rk5upnsxe | [{"section_index": "0", "section_name": "NORMALIZING THE NORMALIZERS: COMPARING AND EXTENDING NETWORK NORMALIZATION SCHEMES", "section_text": "Mengye Ren*, Renjie Liao*, Raquel Urtasun, Fabian H. Sinz, Richard S. Zemel

{mren, rjliao, urtasun}@cs.toronto.edu, fabian.sinz@epagoge.de, zemel@cs.toronto.edu"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However, its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling, and super-resolution."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Standard deep neural networks are difficult to train. Even with non-saturating activation functions such as ReLUs (Krizhevsky et al., 2012), gradient vanishing or explosion can still occur, since the Jacobian gets multiplied by the input activation of every layer. In AlexNet (Krizhevsky et al., 2012), for instance, the intermediate activations can differ by several orders of magnitude. Tuning hyperparameters governing weight initialization, learning rates, and various forms of regularization thus becomes crucial in optimizing performance.

In current neural networks, normalization abounds. One technique that has rapidly become a standard is batch normalization (BN), in which the activations are normalized by the mean and standard deviation of the training mini-batch (Ioffe & Szegedy, 2015). At inference time, the activations are normalized by the mean and standard deviation of the full dataset. A more recent variant, layer normalization (LN), utilizes the combined activities of all units within a layer as the normalizer (Ba et al., 2016). Both of these methods have been shown to ameliorate training difficulties caused by poor initialization, and help gradient flow in deeper models.

A less-explored form of normalization is divisive normalization (DN) (Heeger, 1992), in which a neuron's activity is normalized by its neighbors within a layer. This type of normalization is a well-established canonical computation of the brain (Carandini & Heeger, 2012) and has been extensively studied in computational neuroscience and natural image modelling (see Section 2). However, with few exceptions (Jarrett et al., 2009; Krizhevsky et al., 2012) it has received little
attention in conventional supervised deep learning.

Here, we provide a unifying view of the different normalization approaches by characterizing them as the same transformation but along different dimensions of a tensor, including normalization across examples, layers in the network, filters in a layer, or instances of a filter response. We explore the effect of these varieties of normalization, in conjunction with regularization, on the prediction performance compared to baseline models. The paper thus provides the first study of divisive normalization in a range of neural network architectures, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), and tasks such as image classification, language modeling and image super-resolution. We find that DN can achieve results on par with BN in CNNs and outperforms it in RNNs and super-resolution, without having to store batch statistics. We show that casting LN as a form of DN by incorporating a smoothing parameter leads to significant gains, in both CNNs and RNNs. We also find advantages in performance and stability by being able to drive learning with a higher learning rate in RNNs using DN. Finally, we demonstrate that adding an L1 regularizer on the activations before normalization is beneficial for all forms of normalization."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "In this section we first review related work on normalization, followed by a brief description of regularization in neural networks."}, {"section_index": "4", "section_name": "2.1 NORMALIZATION", "section_text": "Normalization of data prior to training has a long history in machine learning. For instance, local contrast normalization used to be a standard effective tool in vision problems (Pinto et al., 2008; Jarrett et al., 2009; Sermanet et al., 2012; Le, 2013). However, until recently, normalization was usually not part of the machine learning algorithm itself. Two notable exceptions are the original AlexNet by Krizhevsky et al. (2012), which includes a divisive normalization step over a subset of features after ReLU at each pixel location, and the work by Jarrett et al. (2009), who demonstrated that a combination of nonlinearities, normalization and pooling improves object recognition in two-stage networks.

Recently, Ioffe & Szegedy (2015) demonstrated that standardizing the activations of the summed inputs of neurons over training batches can substantially decrease training time in deep neural networks. To avoid covariate shift, where the weight gradients in one layer are highly dependent on previous layer outputs, Batch Normalization (BN) rescales the summed inputs according to their variances under the distribution of the mini-batch data. Specifically, if z_{n,j} denotes the activation of a neuron j on example n, and B(n) denotes the mini-batch of examples that contains n, then BN computes an affine function of the activations standardized over each mini-batch:

\[ \tilde{z}_{n,j} = \gamma_j \, \frac{z_{n,j} - \mathbb{E}[z_j]}{\sqrt{\mathrm{Var}[z_j]}} + \beta_j, \qquad \mathbb{E}[z_j] = \frac{1}{|B(n)|} \sum_{m \in B(n)} z_{m,j}, \]

where the expectation and variance are taken over the mini-batch B(n).

However, training performance in Batch Normalization strongly depends on the quality of the acquired statistics and, therefore, on the size of the mini-batch. Hence, Batch Normalization is harder to apply in cases for which the batch sizes are small, such as online learning or data parallelism. While classification networks can usually employ relatively large mini-batches, other applications such as image segmentation with convolutional nets use smaller batches and suffer from degraded performance. Moreover, application to recurrent neural networks (RNNs) is not straightforward and leads to poor performance (Laurent et al., 2015).

Several approaches have been proposed to make Batch Normalization applicable to RNNs. Cooijmans et al. (2016) and Liao & Poggio (2016) collect separate batch statistics for each time step. However, neither of these techniques addresses the problem of small batch sizes, and it is unclear how to generalize them to unseen time steps.
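For concreteness, a numpy sketch of the mini-batch standardization above, with the affine parameters gamma and beta omitted:

```python
# Mini-batch standardization over a batch of activations.
import numpy as np

def batch_norm(z, eps=1e-5):
    # z: (N, J) activations of one layer over a mini-batch B(n).
    # eps is only for numerical stability; conceptually BN has no smoothing
    # term sigma in its denominator (see Table 1 below).
    mean = z.mean(axis=0, keepdims=True)   # E_B[z_j]
    var = z.var(axis=0, keepdims=True)     # Var_B[z_j]
    return (z - mean) / np.sqrt(var + eps)

z = np.random.randn(32, 64) * 3.0 + 1.0
print(batch_norm(z).mean(), batch_norm(z).std())  # roughly 0 and 1
```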
More recently, Ba et al. (2016) proposed Layer Normalization (LN), where the activations are normalized across all summed inputs within a layer instead of within a batch:

\[ \tilde{z}_{n,j} = \frac{z_{n,j} - \mathbb{E}[z_n]}{\sqrt{\frac{1}{|L(j)|} \sum_{k \in L(j)} \left( z_{n,k} - \mathbb{E}[z_n] \right)^2}}, \qquad \mathbb{E}[z_n] = \frac{1}{|L(j)|} \sum_{k \in L(j)} z_{n,k}, \]

where L(j) contains all of the units in the same layer as j. While promising results have been shown on RNN benchmarks, direct application of layer normalization to convolutional layers often leads to a degradation of performance. The authors hypothesize that since the statistics in convolutional layers can vary quite a bit spatially, normalization with statistics from an entire layer might be suboptimal.

Liao et al. (2016a) proposed to accumulate the normalization statistics over the entire training phase and showed that this can speed up training in recurrent and online learning without a deteriorating effect on the performance. Since gradients cannot be backpropagated through this normalization operation, the authors use running statistics of the gradients instead.

Exploring the normalization of weights instead of activations, Salimans & Kingma (2016) proposed a reparametrization of the weights into a scale-independent representation and demonstrated that this can speed up training time.

Divisive Normalization (DN), on the other hand, modulates the neural activity by the activity of a pool of neighboring neurons (Heeger, 1992; Bonds, 1989). DN is one of the most well studied and widely found transformations in real neural systems, and thus has been called a canonical computation of the brain (Carandini & Heeger, 2012). While the exact form of the transformation can differ, all formulations model the response of a neuron z_i as a ratio between the activity in a summation field A_i and a norm-like function of the suppression field B_i:

\[ \tilde{z}_i = \frac{\sum_{z_j \in A_i} u_j \, z_j}{\left( \sigma^2 + \sum_{z_k \in B_i} w_k \, z_k^2 \right)^{1/2}}, \]

where {u_j} are the summation weights and {w_k} the suppression weights.

Previous theoretical studies have outlined several potential computational roles for divisive normalization, such as sensitivity maximization (Carandini & Heeger, 2012), invariant coding (Olsen et al., 2010), density modelling (Balle et al., 2016), image compression (Malo et al., 2006), distributed neural representations (Simoncelli & Heeger, 1998), stimulus decoding (Ringach, 2009; Froudarakis et al., 2014), winner-take-all mechanisms (Busse et al., 2009), attention (Reynolds & Heeger, 2009), redundancy reduction (Schwartz & Simoncelli, 2001; Sinz & Bethge, 2008; Lyu & Simoncelli, 2008; Sinz & Bethge, 2013), marginalization in neural probabilistic population codes (Beck et al., 2011) and contextual modulations in neural populations and perception (Coen-Cagli et al., 2015; Schwartz et al., 2009).
Various regularization techniques have been applied to neural networks for the purpose of improving generalization and reducing overfitting. They can be roughly divided into two categories, depending on whether they regularize the weights or the activations.

Regularization on Weights: The most common regularizer on weights is weight decay, which just amounts to using the squared L2 norm of the weight vector. An L1 regularizer (Goodfellow et al., 2016) on the weights can also be adopted to push the learned weights to become sparse. Scardapane et al. (2016) investigated mixed norms in order to promote group sparsity.

Regularization on Activations: Sparsity or group-sparsity regularizers on the activations have been shown to be effective in the past (Rozell et al., 2008; Kavukcuoglu et al., 2009), and several regularizers have been proposed that act directly on the neural activations. Glorot et al. (2011) add a sparse regularizer on the activations after ReLU to encourage sparse representations. Dropout, developed by Srivastava et al. (2014), applies random masks to the activations in order to discourage them from co-adapting. DeCov, proposed by Cogswell et al. (2015), tries to minimize the off-diagonal terms of the sample covariance matrix of activations, thus encouraging the activations to be as decorrelated as possible. Liao et al. (2016b) utilize a clustering-based regularizer to encourage the representations to be compact.

(a) Batch-Norm (b) Layer-Norm (c) Div-Norm

Figure 1: Illustration of different normalization schemes, in a CNN. Each H x W-sized feature map is depicted as a rectangle; overlays depict instances in the set of C filters; and two examples from a mini-batch of size N are shown, one above the other. The colors show the summation/suppression fields of each scheme."}, {"section_index": "5", "section_name": "A UNIFIED FRAMEWORK FOR NORMALIZING NEURAL NETS", "section_text": "We first compare the three existing forms of normalization, and show that we can modify batch normalization (BN) and layer normalization (LN) in small ways to make them have a form that matches divisive normalization (DN). We present a general formulation of normalization, where existing normalizations involve alternative schemes of accumulating information. Finally, we propose a regularization term that can be optimized jointly with these normalization schemes to encourage decorrelation and/or improve generalization performance."}, {"section_index": "6", "section_name": "3.1 GENERAL FORM OF NORMALIZATION", "section_text": "Without loss of generality, we denote the hidden input activation of one arbitrary layer in a deep neural network as z in R^{N x L}. Here N is the mini-batch size. In the case of a CNN, L = H x W x C, where H, W are the height and width of the convolutional feature map and C is the number of filters.
For an RNN or fully connected layers of a neural net, L is the number of hidden units.

Different normalization methods gather statistics from different ranges of the tensor and then perform normalization. Consider the following general form:

\[ z_{n,j} = \sum_i w_{i,j} x_{n,i} + b_j, \qquad v_{n,j} = z_{n,j} - \mathbb{E}[A_{n,j}], \qquad \tilde{z}_{n,j} = \frac{v_{n,j}}{\sqrt{\sigma^2 + \mathbb{E}[B_{n,j}^2]}}, \tag{1} \]

where A_{n,j} and B_{n,j} are subsets of z and v respectively. A and B in standard divisive normalization are referred to as summation and suppression fields (Carandini & Heeger, 2012). One can cast each normalization scheme into this general formulation, where the schemes vary based on how they define these two fields. These definitions are specified in Table 1. Optional parameters gamma and beta can be added in the form gamma_j * z-tilde_{n,j} + beta_j to increase the degrees of freedom.

Fig. 1 shows a visualization of the normalization field in a 4-D ConvNet tensor setting. Divisive normalization happens within a local spatial window of neurons across filter channels. Here we set d(·, ·) to be the spatial L-infinity distance.

Model  Range                                                                                              Normalizer Bias
BN     A_{n,j} = {z_{m,j} : m in [1,N], j in [1,H] x [1,W]},  B_{n,j} = {v_{m,j} : m in [1,N], j in [1,H] x [1,W]}   sigma = 0
LN     A_{n,j} = {z_{n,i} : i in [1,L]},                      B_{n,j} = {v_{n,i} : i in [1,L]}                        sigma = 0
DN     A_{n,j} = {z_{n,i} : d(i,j) < R_A},                    B_{n,j} = {v_{n,i} : d(i,j) < R_B}                      sigma > 0

Table 1: Different choices of the summation and suppression fields A and B, as well as the constant sigma in the normalizer, lead to known normalization schemes in neural networks. d(i, j) denotes an arbitrary distance between two hidden units i and j, and R denotes the neighbourhood radius.

Figure 2: Divisive normalization followed by ReLU can be viewed as a new activation function. Left: Effect of varying sigma in this activation function. Right: Two units affect each other's activation in the DN+ReLU formulation.

Smoothing the Normalizers: One obvious way in which the normalization schemes differ is in terms of the information that they combine for normalizing the activations. A second, more subtle but important difference between standard BN and LN as opposed to DN is the smoothing term sigma in the denominator of Eq. (1). This term allows some control of the bias of the variance estimation, effectively smoothing the estimate. This is beneficial because divisive normalization does not utilize information from the mini-batch as in BN, and combines information from a smaller field than LN. A similar but different denominator bias term max(sigma, c) appears in (Jarrett et al., 2009), which is active when the activation variance is small. However, the clipping function makes the transformation non-invertible, losing scale information.

Moreover, if we take the nonlinear activation function after normalization into consideration, we find that sigma changes the overall properties of the non-linearity. To illustrate this effect, we use a simple 1-layer network which consists of two input units, one divisive normalization operator, followed by a ReLU activation function. If we fix one input unit to be 0.5, varying the other one with different values of sigma produces different output curves (Fig. 2, left). These curves exhibit different non-linear properties compared to the standard ReLU. Allowing the other input unit to vary as well results in different activation functions of the first unit depending on the activity of the second (Fig. 2, right). This illustrates potential benefits of including this smoothing term sigma, as it effectively modulates the rectified response to vary from a linear to a highly saturated response.
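A numpy sketch of the general form of Eq. (1) for the fully connected case, with the three field choices of Table 1, may help make this concrete. The indexing conventions and the 1-D neighbourhood used for DN are our own simplifications.

```python
# Unified normalization: v = z - mean(A), z_tilde = v / sqrt(sigma^2 + mean(B v^2)).
import numpy as np

def normalize(z, scheme="DN", R=2, sigma2=1.0):
    # z: (N, L) summed inputs. Returns z_tilde per Eq. (1).
    N, L = z.shape
    if scheme == "BN":            # fields run over the batch; sigma = 0 (Table 1)
        v = z - z.mean(axis=0, keepdims=True)
        return v / np.sqrt((v ** 2).mean(axis=0, keepdims=True))
    if scheme == "LN":            # fields run over the layer; sigma = 0 (Table 1)
        v = z - z.mean(axis=1, keepdims=True)
        return v / np.sqrt((v ** 2).mean(axis=1, keepdims=True))
    # DN: fields are the units within distance R of unit j, with sigma^2 > 0.
    out = np.empty_like(z)
    for j in range(L):
        lo, hi = max(0, j - R), min(L, j + R + 1)
        mean_A = z[:, lo:hi].mean(axis=1)            # E[A_{n,j}]
        v_field = z[:, lo:hi] - mean_A[:, None]      # centered suppression field
        out[:, j] = (z[:, j] - mean_A) / np.sqrt(sigma2 + (v_field ** 2).mean(axis=1))
    return out

z = np.random.randn(8, 16)
print(normalize(z, "DN").shape)  # (8, 16)
```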
In this paper we propose modifications of the standard BN and LN which borrow this additive term σ in the denominator from DN. We study the effect of incorporating this smoother in the respective normalization schemes below.

L1 regularizer: Filter responses in the lower layers of deep neural networks can be quite correlated, which might impair the estimate of the variance in the normalizer. More independent representations help disentangle latent factors and boost the network's performance (Higgins et al., 2016). Empirically, we found that putting a sparse (L1) regularizer

L_{L1} = (α / NL) Σ_{n,j} |v_{n,j}|

on the centered activations v_{n,j} helps decorrelate the filter responses (Fig. 5). Here, N is the batch size, L is the number of hidden units, and L_{L1} is the regularization loss which is added to the training loss.

A possible explanation for this effect is that the L1 regularizer might have a similar effect as maximum likelihood estimation of an independent Laplace distribution. To see that, let p_v(v) ∝ exp(−||v||₁) and x = W⁻¹v, with W a full-rank invertible matrix. Under this model, p_x(x) = p_v(Wx) |det W|.

Then, minimization of the L1 norm of the activations under the volume-conserving constraint det W = const. corresponds to maximum likelihood on that model, which would encourage decorrelated responses. We do not enforce such a constraint, and the filter matrix might not even be invertible. However, the supervised loss function of the network benefits from having diverse non-zero filters. This encourages the network to not collapse filters along the same direction or put them to zero, and might act as a relaxation of the volume-conserving constraint.

3.3 SUMMARY OF NEW MODELS

DN and DN*: We propose DN as a new local normalization scheme in neural networks. In convolutional layers, it operates on a local spatial window across filter channels, and in fully-connected layers it operates on a slice of a hidden state vector. Additionally, DN* has an L1 regularizer on the pre-normalization centered activations v_{n,j}.

BN-s and BN*: To compare with DN and DN*, we also propose modifications to the original BN: we denote by BN-s the variant with σ² in the denominator's square root, and by BN* the variant with the L1 regularizer on top of BN-s.

LN-s and LN*: We apply the same changes to LN as we did to obtain BN-s and BN* from BN. In order to narrow the differences in the normalization schemes down to a few parameter choices, we additionally remove the affine transformation parameters γ and β from LN, such that the difference between LN* and DN* is only the size of the normalization field. γ and β can really be seen as a separate layer, and in practice we find that they do not improve the performance in the presence of σ².

4 EXPERIMENTS

We evaluate the normalization schemes on three different tasks:

- CNN image classification: We apply different normalizations on CNNs trained on the CIFAR-10/100 datasets for image recognition, each of which contains 50,000 training images and 10,000 test images. Each image is of size 32 × 32 × 3 and has been labeled with an object class out of 10 or 100 total classes.
- RNN language modeling: We apply different normalizations on RNNs trained on the Penn Treebank dataset for language modeling, containing 42,068 training sentences, 3,370 validation sentences, and 3,761 test sentences.
- CNN image super-resolution: We train a CNN on low-resolution images and learn cascades of non-linear filters to smooth the upsampled images. We report the performance of the trained CNN on the standard Set14 and Berkeley 200 datasets.

For each model, we perform a grid search over three or four choices of each hyperparameter, including the smoothing constant σ, the L1 regularization constant α, and the learning rate ε, on the validation set.
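Before turning to the individual experiments, here is a small sketch of the L1 activation penalty from Section 3.2, written as a standalone NumPy function. The names (l1_activation_penalty, task_loss) are ours, and the alpha value is an arbitrary placeholder rather than a tuned setting from the grid search.

    import numpy as np

    def l1_activation_penalty(v, alpha=1e-3):
        """L_{L1} = alpha/(N*L) * sum |v_{n,j}| over centered activations v of shape (N, L)."""
        N, L = v.shape
        return alpha / (N * L) * np.abs(v).sum()

    v = np.random.randn(32, 64)      # centered pre-normalization activations
    task_loss = 0.0                  # placeholder for the supervised loss
    total_loss = task_loss + l1_activation_penalty(v, alpha=1e-3)
    print(total_loss)

In a real training loop the penalty would be computed inside the graph so its gradient flows back into the filters that produce v.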
4.1 CIFAR EXPERIMENTS

We used the standard CNN model provided in the Caffe library. The architecture is summarized in Table 2. We apply normalization before each ReLU function. We implement DN as a convolutional operator, fixing the local window size to 5 × 5, 3 × 3, 3 × 3 for the three convolutional layers in all the CIFAR experiments.

We set the learning rate to 1e-3 and momentum to 0.9 for all experiments. The learning rate schedule is set to {5K, 30K, 50K} for the baseline model and to {30K, 50K, 80K} for all other models. At every stage we multiply the learning rate by 0.1. Weights are randomly initialized from a zero-mean normal distribution with standard deviation {1e-4, 1e-2, 1e-2} for the convolutional layers, and {1e-1, 1e-1} for the fully-connected layers. Input images are centered on the dataset image mean.

Table 2: CIFAR CNN specification
Type | Size | Kernel | Stride
input | 32 × 32 × 3 | — | —
conv + relu | 32 × 32 × 32 | 5 × 5 × 3 × 32 | 1
max pool | 16 × 16 × 32 | 3 × 3 | 2
conv + relu | 16 × 16 × 32 | 5 × 5 × 32 × 32 | 1
avg pool | 8 × 8 × 32 | 3 × 3 | 2
conv + relu | 8 × 8 × 64 | 5 × 5 × 32 × 64 | 1
avg pool | 4 × 4 × 64 | 3 × 3 | 2
fully conn. | linear | 64 | —
fully conn. | linear | 10 or 100 | —

Table 3 summarizes the test performances of BN*, LN* and DN*, compared to the performance of a few baseline models and the standard batch and layer normalizations. We also add standard regularizers to the baseline model: L2 weight decay (WD) and dropout. Adding the smoothing constant and L1 regularization consistently improves the classification performance, especially for the original LN. The modification of LN makes it now better than the original BN, and only slightly worse than BN*. DN* achieves comparable performance to BN* on both datasets, while relying only on a local neighborhood of hidden units.

Table 3: CIFAR-10/100 experiments
Model | CIFAR-10 Acc. | CIFAR-100 Acc.
Baseline | 0.7565 | 0.4409
Baseline + WD + Dropout | 0.7795 | 0.4179
BN | 0.7807 | 0.4814
LN | 0.7211 | 0.4249
BN* | 0.8179 | 0.5156
LN* | 0.8091 | 0.4957
DN* | 0.8122 | 0.5066

ResNet Experiments. Residual networks (ResNet) (He et al., 2016), a type of CNN with residual connections between layers, achieve impressive performance on many image classification benchmarks. The original architecture uses BN by default. If we remove BN, the architecture is very difficult to train or converges to a poor solution. We first reproduced the original BN ResNet-32, obtaining 92.6% accuracy on CIFAR-10, and 69.8% on CIFAR-100. Our best DN model achieves 91.3% and 66.6%, respectively.
While this performance is lower than the original BN-ResNet, there is certainly room to improve, as we have not performed any hyperparameter optimization. Importantly, the beneficial effects of sigma (2.5% gain on CIFAR-100) and the L1 regularizer (0.5%) are still found, even in the presence of other regularization techniques such as data augmentation and weight decay in the training.

Since the number of sigma hyperparameters scales with the number of layers, we found that setting sigma as a learnable parameter for each layer helps the performance (1.3% gain on CIFAR-100). Note that training this parameter is not possible in the formulation by Jarrett et al. (2009). The learned sigma shows a clear trend: it tends to decrease with depth, and in the last convolution layer it approaches 0 (see Fig. 3).

Figure 3: Input scale (||x||) vs. learned σ at each layer, color coded by the layer number in ResNet-32, trained on CIFAR-10 (left) and CIFAR-100 (right).

In the RNN setting, DN gathers statistics from a 1-D window of radius R around each hidden unit, i.e., from the units of the neighborhood. Although the hidden states are randomly initialized, this structure will impose local competition among the neighbors:

v_{n,j} = z_{n,j} − (1/(2R+1)) Σ_{r=−R}^{R} z_{n,j+r},    σ_{n,j}² = σ² + (1/(2R+1)) Σ_{r=−R}^{R} v_{n,j+r}²

We follow Cooijmans et al. (2016)'s batch normalization implementation for RNNs: normalizers are separate for the input transformation and the hidden transformation. Let BN(·), LN(·), DN(·) be BatchNorm, LayerNorm and DivNorm, and let g be either tanh or ReLU:

h_{t+1} = g(W_x x_t + W_h h_t + b)
h_{t+1}^{(BN)} = g(BN(W_x x_t + b_x) + BN(W_h h_t))
h_{t+1}^{(LN)} = g(LN(W_x x_t + W_h h_t + b))
h_{t+1}^{(DN)} = g(DN(W_x x_t + W_h h_t + b))

Note that in recurrent BN, the additional parameters γ and β are shared across timesteps, whereas the moving averages of the batch statistics are not shared. For the LSTM version, we followed the released implementation from the authors of layer normalization (https://github.com/ryankiros/layer-norm) and apply LN at the same places as BN and BN*, which is after the linear transformations of W_x x and W_h h individually. For LN* and DN, we modified the places of normalization to be at each non-linearity, instead of jointly on a concatenated vector for the different non-linearities. We found that this modification improves the performance and makes the formulation clearer, since normalization is always a combined operation with the activation function. We include details of the LSTM implementation in the Appendix.

The RNN model is provided by the Tensorflow library (Abadi et al., 2016) and the LSTM version was originally proposed in Zaremba et al. (2014). We used a two-layer stack-RNN of size 400 (vanilla RNN) or 200 (LSTM). R is set to 60 (vanilla RNN) and 30 (LSTM). We tried both tanh and ReLU as the activation function for the vanilla RNN. For the unnormalized baselines and BN+ReLU, the initial learning rate is set to 0.1 and decays by half every epoch, starting at the 5th epoch, for a maximum of 13 epochs. For the other normalized models, the initial learning rate is set to 1.0 while the schedule is kept the same. Standard stochastic gradient descent is used in all RNN experiments, with gradient clipping at 5.0.
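The recurrent variants above differ only in where the normalizer wraps the pre-activation. A minimal NumPy sketch of one normalized step follows; the ln helper stands in for any of BN/LN/DN, and all weights are random placeholders rather than trained values.

    import numpy as np

    def ln(a, sigma=1.0):
        """Layer-style normalizer over the last axis, with the smoothing term sigma."""
        v = a - a.mean(axis=-1, keepdims=True)
        return v / np.sqrt(sigma ** 2 + (v ** 2).mean(axis=-1, keepdims=True))

    def rnn_step(x_t, h_t, Wx, Wh, b, norm=None, g=np.tanh):
        """One recurrent step: h_{t+1} = g(norm(Wx x_t + Wh h_t + b))."""
        pre = x_t @ Wx + h_t @ Wh + b
        return g(norm(pre)) if norm is not None else g(pre)

    D, H = 8, 16
    Wx = np.random.randn(D, H) * 0.1
    Wh = np.random.randn(H, H) * 0.1
    b = np.zeros(H)
    x, h = np.random.randn(1, D), np.zeros((1, H))
    h = rnn_step(x, h, Wx, Wh, b, norm=ln)
    print(h.shape)  # (1, 16)

Replacing ln with a DN-style local normalizer (radius R over the hidden axis) yields the DN variant used in the language-modeling experiments.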
Table 4 shows the test set perplexity for the LSTM models and the vanilla models. Perplexity is defined as ppl = exp(−(1/N) Σ log p(x)). We find that BN and LN alone do not improve the final performance relative to the baseline but, similar to what we see in the CNN experiments, our modified versions BN* and LN* show significant improvements. BN* on the RNN is outperformed by both LN* and DN. By applying our normalization, we can improve the vanilla RNN perplexity by 20%, comparable to an LSTM baseline with the same hidden dimension.

Table 4: PTB word-level language modeling experiments
Model | LSTM | Tanh RNN | ReLU RNN
Baseline | 115.720 | 149.357 | 147.630
BN | 123.245 | 148.052 | 164.977
LN | 119.247 | 154.324 | 149.128
BN* | 116.920 | 129.155 | 138.947
LN* | 101.725 | 129.823 | 116.609
DN* | 102.238 | 123.652 | 117.868

We also evaluate DN on the low-level computer vision problem of single image super-resolution. We adopt the SRCNN model of Dong et al. (2016) as the baseline, which consists of 3 convolutional layers and 2 ReLUs. From bottom to top layers, the sizes of the filters are 9, 5, and 5 (we use the setting of the best model out of all three SRCNN candidates). The numbers of filters are 64, 32, and 1, respectively. All the filters are initialized with a zero-mean Gaussian with standard deviation 1e-3. Then we respectively apply batch normalization (BN) and our divisive normalization with L1 regularization (DN*) to the convolutional feature maps before the ReLUs. We construct the training set in a similar manner as Dong et al. (2016) by randomly cropping 5 million patches (size 33 × 33) from a subset of the ImageNet dataset of Deng et al. (2009). We only train our model for 4 million iterations, which is fewer than the 15 million adopted by SRCNN, as the gain in PSNR and SSIM from training that long is marginal.

We report the average test results, using the standard metrics PSNR and SSIM (Wang et al., 2004), on two standard test datasets: Set14 (Zeyde et al., 2010) and BSD200 (Martin et al., 2001). We compare with two state-of-the-art single image super-resolution methods, A+ (Timofte et al., 2013) and SRCNN (Dong et al., 2016). All measures are computed on the Y channel of the YCbCr color space. We also provide a visual comparison in Fig. 4.

As shown in Tables 5 and 6, DN* outperforms the strong competitor SRCNN, while BN does not perform well on this task. The reason may be that BN applies the same statistics to all patches of one image, which causes some overall intensity shift (see Fig. 4). From the visual comparisons, we can see that our method not only enhances the resolution but also removes artifacts, e.g., the ringing effect in Fig. 4.

Table 5: Average test results of PSNR and SSIM on the Set14 dataset
Model | PSNR (×3) | SSIM (×3) | PSNR (×4) | SSIM (×4)
Bicubic | 27.54 | 0.7733 | 26.01 | 0.7018
A+ | 29.13 | 0.8188 | 27.32 | 0.7491
SRCNN | 29.35 | 0.8212 | 27.53 | 0.7512
BN | 22.31 | 0.7530 | 21.40 | 0.6851
DN* | 29.38 | 0.8229 | 27.64 | 0.7562

Table 6: Average test results of PSNR and SSIM on the BSD200 dataset
Model | PSNR (×3) | SSIM (×3) | PSNR (×4) | SSIM (×4)
Bicubic | 27.19 | 0.7636 | 25.92 | 0.6952
A+ | 27.05 | 0.7945 | 25.51 | 0.7171
SRCNN | 28.42 | 0.8100 | 26.87 | 0.7378
BN | 21.89 | 0.7553 | 21.53 | 0.6741
DN* | 28.44 | 0.8110 | 26.96 | 0.7428

Finally, we investigated the differential effects of the σ² term and the L1 regularizer on the performance. We ran ablation studies on CIFAR-10/100 as well as the PTB experiments. The results are listed in Table 7.

We find that adding the smoothing term σ² and the L1 regularization consistently increases the performance of the models. In the convolutional networks, we find that L1 and σ both have similar effects on the performance; L1 seems to be slightly more important.
In recurrent networks, σ² has a much more dramatic effect on the performance than the L1 regularizer.

Figure 4: Comparisons at a magnification factor of 4. The four columns are (a) Bicubic, (b) SRCNN, (c) BN, and (d) DN*; per-image PSNRs for the three example rows are 29.84/31.33/23.94/31.46 dB, 29.41/33.14/21.88/33.43 dB, and 27.46/30.12/23.91/30.19 dB.

Fig. 5 plots randomly sampled pairwise pre-normalization responses (after the linear transform) in the first layer at the same spatial location of the feature map, along with the average pairwise correlation coefficient (Corr) and mutual information (MI). It is evident that both σ and L1 encourage independence of the learned linear filters.

There are several factors that could explain the improvement in performance. As mentioned above, adding the L1 regularizer on the activations encourages the filter responses to be less correlated. This can increase the robustness of the variance estimate in the normalizer and lead to an improved scaling of the responses to a good regime. Furthermore, adding the smoother to the denominator in the normalizer can be seen as implicitly injecting zero-mean noise on the activations. While noise injection would not change the mean, it does add a term to the variance of the data, which is represented by σ². This term also makes the normalization equation invertible. While dividing by the standard deviation decreases the degrees of freedom in the data, the smoothed normalization equation is fully information preserving. Finally, DN-type operations have been shown to decrease the redundancy of filter responses to natural images and sound (Schwartz & Simoncelli, 2001; Sinz & Bethge, 2008; Lyu & Simoncelli, 2008). In combination with the L1 regularizer, this could lead to a more independent representation of the data and thereby increase the performance of the network.

We have proposed a unified view of normalization techniques which contains batch and layer normalization as special cases. We have shown that when combined with a sparse regularizer on the activations, our framework has significant benefits over standard normalization techniques. We have demonstrated this in the context of both convolutional neural nets as well as recurrent neural networks. In the future we plan to explore other regularization techniques such as group sparsity. We also plan to conduct a more in-depth analysis of the effects of normalization on the correlations of the learned representations.

Table 7: Comparison of standard batch and layer normalization (BN and LN) models to those with only the L1 regularizer (+L1), only the σ smoothing term (-s), and with both (*). We also compare divisive normalization with both (DN*) versus with only the smoothing term (DN).
Model | CIFAR-10 | CIFAR-100 | LSTM | Tanh RNN | ReLU RNN
Baseline | 0.7565 | 0.4409 | 115.720 | 149.357 | 147.630
Baseline +L1 | 0.7839 | 0.4517 | 111.885 | 143.965 | 148.572
BN | 0.7807 | 0.4814 | 123.245 | 148.052 | 164.977
BN +L1 | 0.8067 | 0.5100 | 123.736 | 152.777 | 166.658
BN-s | 0.8017 | 0.5005 | 123.243 | 131.719 | 139.159
BN* | 0.8179 | 0.5156 | 116.920 | 129.155 | 138.947
LN | 0.7211 | 0.4249 | 119.247 | 154.324 | 149.128
LN +L1 | 0.7994 | 0.4990 | 116.964 | 152.100 | 147.937
LN-s | 0.8083 | 0.4863 | 102.492 | 133.812 | 118.786
LN* | 0.8091 | 0.4957 | 101.725 | 129.823 | 116.609
DN | 0.8058 | 0.4892 | 103.714 | 132.143 | 118.789
DN* | 0.8122 | 0.5066 | 102.238 | 123.652 | 117.868

Figure 5: First-layer CNN pre-normalized activation joint histograms. Average pairwise correlation (Corr) and mutual information (MI) per model: Baseline (0.19, 0.37); BN (0.43, 1.20); BN +L1 (0.17, 0.66); BN-s (0.23, 0.80); BN* (0.17, 0.66); LN (0.55, 1.41); LN +L1 (0.17, 0.67); LN-s (0.20, 0.74); LN* (0.16, 0.64); DN (0.21, 0.81); DN* (0.20, 0.73).

Acknowledgements: RL is supported by Connaught International Scholarships. FS would like to thank Edgar Y. Walker, Shuang Li, Andreas Tolias and Alex Ecker for helpful discussions. Supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government.

REFERENCES

Abadi, Martin, Barham, Paul, Chen, Jianmin, Chen, Zhifeng, Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Irving, Geoffrey, Isard, Michael, Kudlur, Manjunath, Levenberg, Josh, Monga, Rajat, Moore, Sherry, Murray, Derek Gordon, Steiner, Benoit, Tucker, Paul A., Vasudevan, Vijay, Warden, Pete, Wicke, Martin, Yu, Yuan, and Zhang, Xiaoqiang. Tensorflow: A system for large-scale machine learning. CoRR, abs/1605.08695, 2016.

Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. CoRR, abs/1607.06450, 2016.

Balle, Johannes, Laparra, Valero, and Simoncelli, Eero P. Density modeling of images using a generalized normalization transformation. ICLR, 2016.

Carandini, M. and Heeger, D. J. Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1):51-62, nov 2012. ISSN 1471-0048.
doi: 10.1038/nrn3136.\nCoen-Cagli, R., Kohn, A., and Schwartz, O. Flexible gating of contextual influences in natural vision Nature Neuroscience, 18(11):1648-1655, 2015. 1SSN 1097-6256. doi: 10.1038/nn.4128\nDeng, Jia, Dong, Wei, Socher, Richard, Li, Li-Jia, Li, Kai, and Fei-Fei, Li. Imagenet: A large-scale hierarchical image database. In CVPR, 2009\nDong, Chao, Loy, Chen Change, He, Kaiming, and Tang, Xiaoou. Image super-resolution using deep convolutional networks. TPAMI, 38(2):295-307, 2016\nFroudarakis, Emmanouil, Berens, Philipp, Ecker, Alexander S, Cotton, R James, Sinz, Fabian I Yatsenko, Dimitri, Saggau, Peter, Bethge, Matthias, and Tolias, Andreas S. Population code ir. mouse V1 facilitates readout of natural scenes through increased sparseness. Nature neuroscience 17(6):851-7, apr 2014. ISSN 1546-1726. doi: 10.1038/nn.3707.\nGatys, Leon A., Ecker, Alexander S., and Bethge, Matthias. Image style transfer using convolutional neural networks. In CVPR, 2016.\nGoodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. Book in preparation for MIT Press, 2016.\nBeck, J. M., Latham, P. E., and Pouget, A. Marginalization in Neural Circuits with Divisive. Normalization. The Journal of neuroscience : the official journal of the Society for Neuroscience 31(43):15310-9, oct 2011. ISSN 1529-2401. doi: 10.1523/JNEUROSCI.1706-11.2011.\nBevilacqua, Marco, Roumy, Aline, Guillemot, Christine, and Morel, Marie-Line Alberi. Low complexity single-image super-resolution based on nonnegative neighbor embedding. In BMVC, 2012.\nHeeger, D. J. Normalization of cell responses in cat striate cortex. Vis Neurosci, 9(2):181-197, 1992 ISSN 09525238.\nHiggins, I., Matthey, L., Glorot, X., Pal, A., Uria, B., Blundell, C., Mohamed, S., and Lerchner, A Early Visual Concept Learning with Unsupervised Deep Learning. CoRR, abs/1606.05579, 2016\nIoffe, Sergey and Szegedy, Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015\nJarrett, K., Kavukcuoglu, K., Ranzato, M. A., and LeCun, Y. What is the best multi-stage architecture for object recognition? ICCV, 2009.\nKavukcuoglu, K., Ranzato, M' A., Fergus, R., and LeCun, Y. Learning invariant features through topographic filter maps. In CVPR Workshops, 2009..\nLaurent, Cesar, Pereyra, Gabriel, Brakel, Philemon, Zhang, Ying, and Bengio, Yoshua. Batch normalized recurrent neural networks. arXiv preprint arXiv:1510.01378, 2015..\nLiao, Q. and Poggio, T. Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex. CoRR, abs/1604.03640, 2016\nLyu, Siwei and Simoncelli, Eero P. Reducing statistical dependencies in natural signals using radia Gaussianization. NIPS, 2008.\nMalo, J., Epifanio, I., Navarro, R., and Simoncelli, E. P. Nonlinear image representation for efficient perceptual coding. TIP, 15(1):68-80, 2006.\nReynolds, J. H. and Heeger, D. J. The normalization model of attention. Neuron, 61(2):168-85, jar 2009. 1SSN 1097-4199. doi: 10.1016/j.neuron.2009.01.002\nSalimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In NIPs, 2016.\nLiao, Qianli, Kawaguchi, Kenji, and Poggio, Tomaso. Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning. CoRR abs/1610.06160, 2016a\nPinto, N., Cox, D. D., and DiCarlo, J. J. Why is Real-World Visual Object Recognition Hard? PLoS Comput Biol, 4(1):e27, jan 2008. 
doi: 10.1371/journal.pcbi.0040027.

Ringach, D. L. Population coding under normalization. Vision Research, 50(22):2223-2232, 2009. ISSN 18785646. doi: 10.1016/j.visres.2009.12.007.

Schwartz, O., Sejnowski, T. J., and Dayan, P. Perceptual organization in the tilt illusion. Journal of Vision, 9(4):1-20, apr 2009. ISSN 1534-7362.

Sinz, Fabian and Bethge, Matthias. Temporal Adaptation Enhances Efficient Contrast Gain Control on Natural Images. PLoS Computational Biology, 9(1):e1002889, jan 2013. ISSN 1553734X.

Sinz, Fabian H. and Bethge, Matthias. The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction. In NIPS, 2008.

Timofte, Radu, De Smet, Vincent, and Van Gool, Luc. Anchored neighborhood regression for fast example-based super-resolution. In ICCV, 2013.

Srivastava, Nitish, Hinton, Geoffrey E., Krizhevsky, Alex, Sutskever, Ilya, and Salakhutdinov, Ruslan. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958, 2014.

Zaremba, Wojciech, Sutskever, Ilya, and Vinyals, Oriol. Recurrent neural network regularization. CoRR, abs/1409.2329, 2014.

Zeyde, Roman, Elad, Michael, and Protter, Matan. On single image scale-up using sparse representations. In International Conference on Curves and Surfaces, pp. 711-730. Springer, 2010.

EFFECT OF SIGMA AND L1 ON CIFAR-10/100 VALIDATION SET

We plot the effect of σ and the L1 regularization on the validation performance in Figure 6. While sigma makes the largest contribution to the improvement, L1 also provides much gain for the original versions of LN and BN.

Figure 6: Validation accuracy on CIFAR-10/100 showing the effect of the sigma constant (a, b) and the L1 regularization (c, d) on BN, LN, and DN.

LSTM IMPLEMENTATION DETAILS

In the LSTM experiments, we found that having an individual normalizer for each non-linearity (sigmoid and tanh) helps the performance for both LN and DN. Eqs. (12)-(14) are the standard LSTM equations; letting N be the normalizer function, our new normalizer replaces each non-linearity as in Eqs. (15)-(16). This modification can also be thought of as combining normalization and activation into a single activation function.

[i_t; f_t; o_t; g_t] = W_h h_{t-1} + W_x x_t + b    (12)
c_t = σ(f_t) ⊙ c_{t-1} + σ(i_t) ⊙ tanh(g_t)    (13)
h_t = σ(o_t) ⊙ tanh(c_t)    (14)
σ(x) → σ(N(x))    (15)
tanh(x) → tanh(N(x))    (16)

This is different from the released implementation of LN and BN in LSTMs, which separately normalize the concatenated vectors W_h h_{t-1} and W_x x_t. For all LN* and DN experiments we choose this new formulation, whereas the LN experiments are consistent with the released version.

Table 8: Average test results of PSNR and SSIM on the Set5 dataset
Model | PSNR (×3) | SSIM (×3) | PSNR (×4) | SSIM (×4)
Bicubic | 30.41 | 0.8678 | 28.44 | 0.8097
A+ | 32.59 | 0.9088 | 30.28 | 0.8603
SRCNN | 32.83 | 0.9087 | 30.52 | 0.8621
BN | 22.85 | 0.8027 | 20.71 | 0.7623
DN* | 32.83 | 0.9106 | 30.62 | 0.8665

Figure 7: Comparisons at a magnification factor of 4. The four columns are (a) Bicubic, (b) SRCNN, (c) BN, and (d) DN*; per-image PSNRs for the two example rows are 21.69/22.62/20.06/22.69 dB and 31.55/32.29/19.39/32.31 dB.
ry54RWtxx | [{"section_index": "0", "section_name": "LEARNING A STATIC ANALYZER: A CASE STUDY ON A TOY LANGUAGI", "section_text": "Manzil Zaheer\nCarnegie Mellon University\nStatic analyzers are meta-programs that analyze programs to detect potential er. rors or collect information. For example, they are used as security tools to detec potential buffer overflows. Also, they are used by compilers to verify that a pro. gram is well-formed and collect information to generate better code. In this paper. we address the following question: can a static analyzer be learned from data?. More specifically, can we use deep learning to learn a static analyzer without the need for complicated feature engineering? We show that long short-term mem ory networks are able to learn a basic static analyzer for a simple toy language. However, pre-existing approaches based on feature engineering, hidden Markov models, or basic recurrent neural networks fail on such a simple problem. Finally we show how to make such a tool usable by employing a language model to help. the programmer detect where the reported errors are located.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Can programming language tools, such as static analyzers, be learned from data using deep learning?. While research projects trying to use machine learning to design better programming language tools. are burgeoning, they all rely on feature engineering (Brun & Ernst] 2004) Kolter & Maloof2006 Yamaguchi et al.]2012] Tripp et al.]2014] Raychev et al.]2015]Allamanis et al.2015} Nguyen & Nguyen2015 Gvero & Kuncak]2015 Long & Rinard[2016). Unfortunately, feature engineering for programs is difficult and indeed the features often seem ad-hoc and superficial..\nThis raises the question of whether it could be possible to approach a complicated problem sucl. as static analysis - the automated detection of program properties - from almost raw features. In. this paper, our goal is to present a very simple experiment that clearly shows that not only feature engineering can completely fail for even the simplest static analysis task, but that deep learning witl. neural networks can indeed be successful..\nThe task in which we are interested is simple: we want to ensure that program variables are define. before they are used. We design a toy language to focus on the problem, and indeed our languag is so simple that if it satisfies the aforementioned property, then it is semantically valid. Since programs are sequences of tokens, we experiment with different types of sequence learning method [Xing et al.]2010). We try feature-based methods in which we extract features from the sequenc and then use a classifier to decide whether or not the program is semantically valid. We show tha they all fail, including methods that compute a sequence embedding. Then, we try different model based methods (Lipton]2015): hidden Markov models (HMM), recurrent neural networks (RNN). and long short-term memory networks (LSTM). Our results show that HMM and RNN do poorl (albeit better than random), while an LSTM is almost perfectly accurate. This finding is somewha surprising as static analysis is essentially a document classification problem and LSTMs are knowr to perform poorly on related tasks, such as sentiment analysis (Dai & Le2015)..\nThe obvious question about such an experiment is: why would we want to learn a static analyzer. for a problem that we know of a perfectly fine engineered solution? The answer is that we want. 
to initiate investigation into the use of deep learning for program analysis, and our broader hopes are two-fold. First, static analyzers are very complicated and often limited by the number of false positives and false negatives they generate. In cases where false negatives are unacceptable, a learned static analyzer may not be the right approach. But when the goal is rather to find a good balance between false positives and false negatives, learned static analyzers might be more flexible. Second, as we will briefly show in the paper, learned static analyzers have a resilience to small errors that might lead to more robust tools. Indeed, even though our goal is to detect errors in syntactically valid programs, our tool works despite the presence of small syntactic errors, such as the omission of a semicolon. This resilience to errors is, in our opinion, a very promising aspect of learned methods for the analysis of programs.

Another key problem with static analysis programs is that, to be useful, they need to help the programmer understand what the cause of the error is. In that case, models based on recurrent neural networks really shine, because they can be trained to provide such information. Indeed, in our experiment, we show how to use a language model to locate the position of erroneous variables in examples classified by the static analyzer as being wrong. This is very important for practical static analysis, since a tool that merely reports the existence of an error in a large code file is not useful.

Our goal is to study the following static analysis problem: given a program, is every variable defined before it is used? Because this problem is undecidable for a Turing-complete language, programming languages such as Java impose constraints on what is a correct variable initialization. For example, a variable may not in general be defined within only one branch of an if-then-else statement and used afterward, since it can be impossible to guarantee which branch will be executed for every run.

In order to better understand whether this is feasible and which methods work, we design a toy language. As an example, in this language, we can write a program that computes the 42nd Fibonacci number as follows.

1  v0 = 1; v1 = 1;
2  v2 = 0;
3  while (v2 < 42) {
4    v3 = v1;
5    v1 = v0 + v1;
6    v0 = v3;
7    v2 = v2 + 1;
8  }
9  return v1;

If we were to invert lines 4 and 6, then not only would the program be incorrect, but it would be semantically invalid, since in the first execution of the loop, variable v3 has not yet been defined.

In order to precisely explain what the task is, we now briefly present the syntax and semantics of our experimental programming language.

We present the syntax of the language in Backus-Naur form in Figure 1. The symbols delimited by ⟨⟩ are non-terminals while the symbols delimited by quotes are terminals. The symbol ⟨program⟩ is the starting non-terminal. A program is composed of an optional statement followed by an expression. The statement can be a list of statements, control-flow statements like conditionals or iterations, or the binding of an expression to a variable. The expressions are simple arithmetic expressions. For simplicity, the test expressions used in conditional statements are distinct from the other expressions, which is a simple syntactic way to enforce basic type safety.
The integers are simple integer values of the form [0-9]+ while the identifiers are of the form v[0-9]+.

Figure 1: Syntax of the language, presented in Backus-Naur form. The symbols delimited by ⟨⟩ are non-terminals while the symbols delimited by quotes are terminals. ⟨program⟩ is the starting non-terminal.

The semantics of our experimental programming language is presented as a big-step operational semantics in Figure 2. For simplicity, we only present a subset of the rules. It is composed of four predicates. The predicate Γ ⊢ e ⇒ v denotes the value v resulting from evaluating e in the environment Γ. The environment is simply a list of bindings from variables to their values. We present the rules that define this predicate: ADD, INT, LOOKUP, TEST1, and TEST2. The most important is the LOOKUP rule, which states that the value of a variable is the value associated with it in the environment. Note that this is only well-defined if the variable actually is in the environment; otherwise the semantics is undefined. The goal of our static analyzer is to ensure this can never happen.

The predicate Γ, s ⇒ Γ′ denotes the execution of a statement that transforms the environment by adding variable bindings to it. For example, the INTRO rule shows that a variable assignment adds a variable binding to the environment. The CLOSURE rule states that a possible transition is the reflexive and transitive execution of a single statement, Γ, s ⇒* Γ′. The rules WHILE1 and WHILE2 formalize the execution of a while loop. Finally, the predicate [[p]] = v denotes the evaluation of a complete program into a resulting value.

ADD:      Γ ⊢ e1 ⇒ v1    Γ ⊢ e2 ⇒ v2   ⟹   Γ ⊢ e1 + e2 ⇒ v1 + v2
INT:      Γ ⊢ i ⇒ i
LOOKUP:   x ∈ Γ   ⟹   Γ ⊢ x ⇒ Γ(x)
TEST1:    Γ ⊢ e1 ⇒ v1    Γ ⊢ e2 ⇒ v2    v1 = v2   ⟹   Γ ⊢ e1 = e2 ⇒ T
TEST2:    Γ ⊢ e1 ⇒ v1    Γ ⊢ e2 ⇒ v2    v1 ≠ v2   ⟹   Γ ⊢ e1 = e2 ⇒ F
INTRO:    Γ ⊢ e ⇒ v   ⟹   Γ, x = e ⇒ (x, v) :: Γ
CLOSURE:  Γ, s ⇒ Γ′   ⟹   Γ, s ⇒* Γ′ (reflexive, transitive closure)
WHILE1:   Γ ⊢ t ⇒ T    Γ, s ⇒* Γ′    Γ′, while (t) s ⇒ Γ″   ⟹   Γ, while (t) s ⇒ Γ″
WHILE2:   Γ ⊢ t ⇒ F   ⟹   Γ, while (t) s ⇒ Γ
PROGRAM:  ∅, s ⇒* Γ    Γ ⊢ e ⇒ v   ⟹   [[s; return e]] = v

Figure 2: Semantics of the language, presented as inference rules (reconstructed above in linear form). The semantics is defined by four predicates formalizing the evaluation of expressions (Γ ⊢ e ⇒ v), a single statement step (Γ, s ⇒ Γ′), the reflexive and transitive closure of statements (Γ, s ⇒* Γ′), and the evaluation of the program overall ([[p]] = v).

Table 1: Accuracy of different learning algorithms on the static analysis task. LR stands for logistic regression, MLP stands for multilayer perceptron, HMM stands for hidden Markov model, RNN stands for recurrent neural network, LSTM stands for long short-term memory.

2.2 THE TASK

Now that we have presented the language, we can state more precisely the goal of the static analysis. A program such as "v1 = 4; return v1 + v2;", while syntactically valid, is not well-defined since variable v2 has not been defined. A static analyzer is a function that takes such a program as an input and returns a Boolean value.

Function analyze should return true only if every variable is defined before it is used. We chose the input to be the sequence of tokens of the program rather than the raw characters, for simplicity. It is easy to define such a function directly, but our goal is to see whether we can learn it from examples. Note that unlike previous work combining static analysis and machine learning, we are not trying to improve a static analyzer using machine learning, but rather learning the static analyzer completely from data.
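For reference, the "easy to define" engineered analyzer can be sketched in a few lines of Python over the token sequence. This is our own illustrative version, not the authors' code; it assumes that an identifier immediately followed by '=' is the left-hand side of an assignment (the toy language's test expressions would need a small extension to this rule), and it delays inserting the bound variable until the statement's ';' so that uses on the right-hand side of its own definition are caught.

    def analyze(tokens):
        """Return True iff every variable is defined before it is used."""
        defined, pending = set(), None
        for i, tok in enumerate(tokens):
            if tok == ';' and pending is not None:
                defined.add(pending)          # binding takes effect after the statement
                pending = None
            elif tok.startswith('v') and tok[1:].isdigit():
                if i + 1 < len(tokens) and tokens[i + 1] == '=':
                    pending = tok             # left-hand side of an assignment
                elif tok not in defined:
                    return False              # use of an undefined variable
        return True

    print(analyze(['v1', '=', '4', ';', 'return', 'v1', ';']))             # True
    print(analyze(['v1', '=', '4', ';', 'return', 'v1', '+', 'v2', ';']))  # False
    print(analyze(['v1', '=', 'v1', '+', '3', ';']))                       # False

The learning question studied below is whether a network can recover this behavior from labeled examples alone.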
To learn the static analyzer, we compile a balanced set of examples in which programs are labeled with a single Boolean value indicating whether the program should be accepted or not.

The dataset contains 200,000 examples, half of which are valid programs and half of which are invalid programs. The invalid programs are of two forms: half of them contain variables that have not been defined at all, and the other half contains programs where the order of statements has been swapped so that a variable use appears before its definition. Note that this swapping of statements results in documents that have the exact same bag-of-words but different labels. Of the 200,000 examples, we use 150,000 as the training set and 50,000 as the test set, while making sure to respect a perfect balance between valid and invalid programs.

To create this dataset, we have built our own compiler and example generator for our language. The example generator only produces syntactically valid programs. The programs are generated using a variety of random decisions: for example, when trying to generate a statement, we must decide with what probability we want to choose a variable assignment versus a while loop or another type of statement. We vary the probability to try to avoid producing a dataset with a spurious signal, but this is a very delicate issue. We also try our classifiers on hand-written programs.

N-grams and classification: We attempt to learn the static analyzer using a classic approach of feature engineering followed by classification. We try both unigram and bigram features and classify the examples using either a linear logistic regression or a non-linear multilayer perceptron. We expect this approach to fail since n-gram features fail to capture statement ordering, and this serves as a test to make sure our dataset does not contain any spurious signal. Indeed, these methods do not perform better than random.

Sequence embedding and classification: We also attempt to use an LSTM for our feature engineering. In this case, we first train an LSTM as a language model. Then, for classification, we first execute our language model on the example program and use the last hidden state as an embedding. This embedding is used as an input to both a logistic regression and a multilayer perceptron. This approach fails as well and does not perform better than random. It is important to note that we might also consider using an RNN encoder-decoder to produce the embedding, but we leave this for future work.

Sequence classification: We tried three model-based approaches to sequence classification. First, we tried to use an HMM trained using the Baum-Welch algorithm. Second, we tried to train a vanilla RNN with a cross-entropy loss using stochastic gradient descent (SGD). Third, we tried to train an LSTM with a cross-entropy loss and SGD; more precisely, we use the variant of SGD known as RMSProp. In both the RNN and LSTM cases, we used the Keras framework.
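A minimal sketch of the kind of Keras sequence classifier just described is shown below, using the classic Keras API. The vocabulary size, embedding dimension, hidden size, padded program length, and epoch count are illustrative guesses, not the authors' settings.

    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Dense

    vocab_size, max_len = 64, 200   # token vocabulary and padded program length
    model = Sequential([
        Embedding(vocab_size, 32, input_length=max_len),
        LSTM(128),                  # many-to-one: only the final state is used
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='rmsprop', loss='binary_crossentropy',
                  metrics=['accuracy'])
    model.summary()
    # model.fit(X_train, y_train, validation_split=0.1, epochs=10)

The HMM and vanilla-RNN baselines fit the same interface: a token sequence in, a single validity probability out.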
These sequence classification approaches perform better than the other approaches. However, the HMM and the RNN still perform poorly. Interestingly, the LSTM can achieve an accuracy of 98.3%. The training of the LSTM is very robust; we did not need to do any complicated parameter search to obtain these results. The false positive rate (i.e., the program is correct but predicted as faulty) is 1.0% and the false negative rate (i.e., the program is faulty but classified as correct) is 2.5%.

Using differentiable data structures: The key problem in detecting uninitialized variables is to remember which variables have been defined up to some program point. A solution is to employ a set data structure: if we encounter a variable definition, we add the variable to the set; if we encounter a variable use, we test whether that variable is in the set of defined variables. With this in mind, we design a differentiable set data structure to augment an RNN, to see if the resulting network can learn (from the training data alone) a policy of how to use the set.

The set is represented by a vector f and is intended to be used as a bitmap by the network. The intent is that each possible value corresponds to a bit in the vector, which is set to 1 if the element is in the set and 0 otherwise. An action a on the set can be either adding an element to the set or testing whether some value is in the set. Values v are indices into the set representation.

The RNN can be a simple Elman network or an LSTM. The architecture is shown in Figure 4:

h_t = RNN([x_t, f_{t-1}], h_{t-1})
a_t = σ(W_a x_t + b_a)
v_t = softmax(W_{i2} tanh(W_{i1} x_t + b_{i1}) + b_{i2})
f_t = max{f_{t-1}, a_t · v_t}
p_t = ⟨f_t, v_t⟩
y_t = σ(W_y h_t + U_y p_t + b_y)

Figure 3: A look inside the predictions of different networks that use an LSTM and a differentiable set data structure. The commented lines show the label attached to each variable by the network: a 1 means a variable is properly used, while a 0 means a variable was not initialized. (a) Classification task: once an error is detected, the rest of the outputs is meaningless. (b) Transduction task: the network gets every output right. (c) An example for which both classification and transduction work.

Unfortunately, training differentiable data structures is sometimes difficult, requiring extensive hyperparameter tuning and cross-validation to find a good weight initialization. Further, LSTMs are often able to learn the training data single-handedly, causing the network to learn a policy that ignores the data structure. To circumvent these problems, we annotate the training data with additional intermediate signals: specifically, we annotate each token with a binary label that is true if and only if the token is a variable use that has not been initialized.
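The set-update equations above are simple enough to trace by hand. The following NumPy sketch runs one step of the differentiable set with random placeholder controller weights; all tensor names mirror the equations, but the sizes (set capacity S, input dimension D) are arbitrary choices for illustration.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    S, D = 16, 8                              # set capacity, input dimension
    Wa, ba = np.random.randn(D) * 0.1, 0.0
    Wi1, bi1 = np.random.randn(32, D) * 0.1, np.zeros(32)
    Wi2, bi2 = np.random.randn(S, 32) * 0.1, np.zeros(S)

    def step(x, f):
        a = sigmoid(Wa @ x + ba)                          # add (a ~ 1) vs. test (a ~ 0)
        v = softmax(Wi2 @ np.tanh(Wi1 @ x + bi1) + bi2)   # soft index into the bitmap
        f = np.maximum(f, a * v)                          # monotone insert
        p = f @ v                                         # membership read-out <f, v>
        return f, p

    f = np.zeros(S)                           # soft bitmap, initially empty
    f, p = step(np.random.randn(D), f)
    print(f.round(2), p)

Because the update uses max rather than addition, the bitmap never "forgets" a definition, which matches the semantics of the language (bindings are never removed).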
Note that the additional labels result in a per-token classification problem, but we convert the network back into a program classifier by employing min-pooling over the per-token softmax outputs. We experiment with both per-program (sequence classification) and per-token (sequence transduction) classifiers, as described next.

Sequence classification: As in the previous experiments, we train using the program as the input sequence and a single Boolean label to indicate whether the program is valid or not. For the network with the differentiable set to produce one output, we apply min-pooling across all the decisions. This method improves over an LSTM and achieves an accuracy of 99.3%. Note that we did not need to do any complicated parameter search to obtain these results. The false negative rate (i.e., the program is faulty but classified as correct) is 0.8% and the false positive rate (i.e., the program is correct but predicted as faulty) is 0.6%.

To understand the behavior of the network, we remove the last min-pooling layer and look at the decision made by the network for each input token. This reveals an interesting pattern: the network correctly identifies the first error location and subsequently emits incorrect outputs. Thus, it is comparable to conventional (non-ML) static analysis algorithms that give up after the first error. For example, in the example in Figure 3a, the first variable use is correctly identified as invalid, but the rest of the output is incorrect.

Sequence transduction: Finally, we run an experiment at token-level granularity. In this case, the network produces not just a single output but as many outputs as inputs (a many-to-many architecture); we refer to this approach as sequence transduction, to distinguish it from the recurrent networks that produce a single label (a many-to-one architecture). The training data also contains the label for each token in the program. This approach achieves an accuracy of 99.7%. The training of the transduction task is very robust; we did not need to do any complicated parameter search to obtain these results. The false negative rate is 0.4% and the false positive rate is 0.2%.

Given the token-level data, it seems that the network has induced a use of the set data structure that corresponds to what a traditional algorithm would do. Aside from using the set to keep track of defined variables, it correctly handles the tricky case of a statement such as v1 = v1 + 3; by making sure that the variable v1 is introduced into the set only after the statement is finished. For example, in the example presented in Figure 3b, the declaration of the variable v14 utilizes the value of the still-undeclared variable v14, and the network correctly identifies it.

Unfortunately, and interestingly, the accuracy is not perfect. Even though it looks like the correct use of the set has been learned, there are a few rare cases where the network makes simple mistakes. For example, some of the errors happen on some of the simplest and shortest programs, where the network fails to insert the declared variable into the set.

Figure 4: Overview of a network utilizing the differentiable set data structure for the task of static analysis. It consists of a neural controller and a fixed-size filter.
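The min-pooling conversion described at the start of this subsection is one line of code: a program is accepted only if every token looks valid. A tiny sketch, with all names ours:

    import numpy as np

    def program_score(token_probs):
        """token_probs: per-token P(token is a valid use); returns P(program valid)."""
        return token_probs.min()

    probs = np.array([0.99, 0.97, 0.04, 0.98])  # one low-confidence token
    print(program_score(probs))                  # 0.04 -> the program is rejected

Because min is differentiable almost everywhere, the per-token network can still be trained end-to-end against the single program-level label.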
Conclusion: In conclusion, an out-of-the-box LSTM achieves a promising accuracy on this task, and an LSTM equipped with a differentiable set data structure has an almost perfect accuracy. Interestingly, none of the other approaches, including the HMM and the RNN, could deliver satisfactory results.

REPORTING USEFUL ERROR MESSAGES

While the above experiment demonstrates that it is possible to learn an accurate static analyzer, practically, such an analyzer is somewhat useless unless we can also help the programmer locate the potential errors. That is, imagine if a tool reported that there is a potential buffer overflow in your code base without any indication of where the problem is: it would not be of much use.

Therefore we train a second LSTM as a language model over true instances of our programming language. That is, we train the LSTM to predict the next character in the sequence, and for every character in the sequence, the model provides the probability of observing this specific character. The idea is that we want to look at all the variable uses in the program and, if the probability of a variable use is below a certain threshold, report the use as a potential source of error.

We present several such examples in Figure 5. We color a variable use in blue if its probability is above the threshold, and in purple if it is below the threshold and therefore potentially the source of the error.
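The thresholding rule can be sketched as follows (the examples it applies to are shown in Figure 5 below). This is an illustration at token level for brevity, whereas the paper's model is character-level; lm_prob stands in for the trained LSTM language model, and the threshold value and all names are hypothetical.

    def flag_suspicious_uses(tokens, lm_prob, threshold=0.05):
        """Flag variable uses the language model finds unlikely in context."""
        flagged = []
        for i, tok in enumerate(tokens):
            is_use = (tok.startswith('v') and tok[1:].isdigit()
                      and not (i + 1 < len(tokens) and tokens[i + 1] == '='))
            if is_use and lm_prob(tokens[:i], tok) < threshold:
                flagged.append((i, tok))   # report position and variable name
        return flagged

    # Toy stand-in model: pretend 'v2' is surprising wherever it is used.
    fake_lm = lambda prefix, tok: 0.01 if tok == 'v2' else 0.9
    program = ['v1', '=', '37', ';', 'v3', '=', '(', 'v2', '+', '20', ')', ';']
    print(flag_suspicious_uses(program, fake_lm))   # [(7, 'v2')]

In the real tool, the flagged positions drive the blue/purple coloring shown in the figure.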
1. v1 = 37; v2 = (v1 + 20);
2. v1 = 37; v1 = (v2 + 20);
3. v2 = 37; v1 = (v2 + 20);
4. v2 = 37; v2 = (v2 + 20);
5. v2 = 37; v2 = (v2 + 20); v3 = (v2 + 40);
6. v2 = 37; v2 = (v2 + 20); v2 = (v3 + 40);
7. v2 = 37; v2 = (v2 + 20); v3 = (v1 + v2);
8. v2 = 37; v1 = (v2 + 20); v3 = (v1 + v2);
9. v1 = 37; v2 = (v2 + 20); v3 = (v1 + v2);
10. v1 = 37; v3 = (v2 + 20); v5 = (v3 + v4);
11. v1 = 37; v3 = (v2 + 20); v5 = (v3 + v2);
12. v1 = 37 v2 = (v1 + 20);
13. v1 = 37 v1 = (v2 + 20);

Figure 5: Examples of programs annotated with variable usage. The uses colored in blue are considered to have been properly defined, while the uses in purple are considered to be faulty. This tool is run when the classifier detects a program error, to help the programmer understand what the problem is.

As we can see from the examples, the method works well. The first four examples show simple cases with only two variables. Note that from the perspective of a bag-of-words classifier, these two programs are identical. Yet the LSTM language model, which takes into account the "word" order, is able to model them differently. Examples 5-11 are more complicated in that the variables are used or defined several times. In example 9, the language model accurately reports the first use of v2 as incorrect and the second use of v2 as correct. This is a somewhat interesting example, as the incorrect use of v2 is in the definition of v2 itself. In example 10, we can see that the language model can handle multiple incorrect variable uses; this success crucially depends on the ability of the language model to recover from the error and still accurately model the remainder of the program. Finally, examples 12 and 13 demonstrate robustness. Despite the fact that these two examples are syntactically incorrect, the language model correctly reports the semantic errors. The resilience of the learned tools to small errors is part of what makes them so promising for program analysis.

5 RELATED WORK

There is a growing body of work employing machine learning to improve programming language tools. In such works, machine learning is used to complement the traditional static analysis methods; further, they rely on extensive feature engineering. In Brun & Ernst (2004), dynamic analysis is used to extract features that are used to detect latent code errors. In Kolter & Maloof (2006), n-gram features are used to detect viruses in binary code. In Yamaguchi et al. (2012), parts of the abstract syntax tree of a function are embedded into a vector space to help detect functions similar to a known faulty one. In Tripp et al. (2014), various lexical and quantitative features of a program are used to improve an information analysis and reduce the number of false alarms reported by the tool. In Raychev et al. (2015), dependency networks are used with a conditional random field to de-obfuscate and type Javascript code. In Allamanis et al. (2015), the structure of the code is used to suggest method names. In Nguyen & Nguyen (2015), n-grams are used to improve code completion tools. In Gvero & Kuncak (2015), program syntax is used to learn a tool that can generate Java expressions from free-form queries. In Long & Rinard (2016), a feature extraction algorithm is designed to improve automatic patch generation.

We have shown that it is possible to learn a static analyzer from data. Even though the problem we address is particularly simple and on a toy language, it is interesting to note that in our experiments only LSTM networks provided a reasonable enough solution. We have also shown that it is possible to make the static analyzer useful by using a language model to help the programmer understand where to look in the program to find the error.

Of course, this experiment is very far from any practical tool. First, dealing with more complicated programs involving memory, functions, and modularity should be vastly more complex. Also, our solution is very brittle. For example, in our language, the space of variable names is very restricted; it might be much more difficult to deal with normal variable names, where a specific variable name might not appear at all in the training dataset.

Finally, a fundamental issue is false positives, that is, programs that are wrongly classified as being without error. This is a serious problem that may make such a tool risky to use. However, note that there are useful programming language tools that do generate false positives. For instance, a tool that reports buffer overflows might not catch every error, but it is still useful if it catches some. Another possibility is to consider approaches where a result is verified by an external tool. For example, in the field of certified compilation, Tristan & Leroy (2008) have shown that it can be acceptable to use an untrusted, potentially bogus, program transformation as long as each use can be formally checked.
Also, as exemplified by Gulwani & Necula (2003; 2004; 2005), some static analysis algorithms do trade a small amount of unsoundness for much faster computation, which can be necessary when applying programming tools to very large code bases.

REFERENCES

Miltiadis Allamanis, Earl T. Barr, Christian Bird, and Charles Sutton. Suggesting accurate method and class names. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, pp. 38-49, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3675-8. doi: 10.1145/2786805.2786849. URL http://doi.acm.org/10.1145/2786805.2786849.

Fan Long and Martin Rinard. Automatic patch generation by learning correct code. In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '16, pp. 298-312, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-3549-2. doi: 10.1145/2837614.2837617. URL http://doi.acm.org/10.1145/2837614.2837617.

Anh Tuan Nguyen and Tien N. Nguyen. Graph-based statistical language model for code. In Proceedings of the 37th International Conference on Software Engineering - Volume 1, ICSE '15, pp. 858-868, Piscataway, NJ, USA, 2015. IEEE Press. ISBN 978-1-4799-1934-5. URL http://dl.acm.org/citation.cfm?id=2818754.2818858.

Veselin Raychev, Martin Vechev, and Andreas Krause. Predicting program properties from "big code". In Proceedings of the 42nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '15, pp. 111-124, New York, NY, USA, 2015. ACM. ISBN 978-1-4503-3300-9. doi: 10.1145/2676726.2677009. URL http://doi.acm.org/10.1145/2676726.2677009.

Jean-Baptiste Tristan and Xavier Leroy. Formal verification of translation validators: A case study on instruction scheduling optimizations. In Proceedings of the 35th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '08, pp. 17-27, New York, NY, USA, 2008. ACM. ISBN 978-1-59593-689-9. doi: 10.1145/1328438.1328444. URL http://doi.acm.org/10.1145/1328438.1328444.

Fabian Yamaguchi, Markus Lottmann, and Konrad Rieck. Generalized vulnerability extrapolation using abstract syntax trees. In Proceedings of the 28th Annual Computer Security Applications Conference, ACSAC '12, pp. 359-368, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1312-4. doi: 10.1145/2420950.2421003. URL http://doi.acm.org/10.1145/2420950.2421003.

Sumit Gulwani and George C. Necula. Precise interprocedural analysis using random interpretation. In Proceedings of the 32nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '05, pp. 324-337, New York, NY, USA, 2005. ACM. ISBN 1-58113-830-X. doi: 10.1145/1040305.1040332. URL http://doi.acm.org/10.1145/1040305.1040332.
Hku9NK5lx

TRAINING COMPRESSED FULLY-CONNECTED NETWORKS WITH A DENSITY-DIVERSITY PENALTY

Haoran Cai

Shengjie Wang
Department of CSE, University of Washington
wangsj@cs.washington.edu

ABSTRACT

Deep models have achieved great success on a variety of challenging tasks. However, the models that achieve great performance often have an enormous number of parameters, leading to correspondingly great demands on both computational and memory resources, especially for fully-connected layers. In this work, we propose a new "density-diversity penalty" regularizer that can be applied to fully-connected layers of neural networks during training. We show that using this regularizer results in significantly fewer parameters (i.e., high sparsity), and also significantly fewer distinct values (i.e., low diversity), so that the trained weight matrices can be highly compressed without any appreciable loss in performance. The resulting trained models can hence reside on computational platforms (e.g., portables, Internet-of-Things devices) where it otherwise would be prohibitive.

1 INTRODUCTION

Deep neural networks have achieved great success on a variety of challenging data science tasks (Krizhevsky et al., 2012; Hinton et al., 2012; Bahdanau et al., 2014; Mnih et al., 2015; Silver et al., 2016; Sutskever et al., 2014). However, the models that have achieved this success have a very large number of parameters, a consequence of their wide and deep architectures. Although such models yield great performance benefits, the corresponding memory and computational costs are high, making such models inaccessible to lightweight architectures (e.g., portable devices, Internet-of-Things devices, etc.). In such settings, the deployment of neural networks offers tremendous potential to produce novel applications, yet the modern top-performing networks are often infeasible on these platforms.

Fully-connected layers and convolutional layers are the two most commonly used neural network structures. While networks that consist of convolutional layers are particularly good for vision tasks, the fully-connected layers, even if they are in the minority, are responsible for the majority of the parameters. For example, the VGG-16 network (Simonyan & Zisserman, 2014a) has 13 convolutional layers and 3 fully-connected layers, but the parameters of the 13 convolutional layers amount to only ~1/9 of the parameters of the three fully-connected layers. Moreover, for general tasks, convolutional layers may not be applicable (data might be one-dimensional, or there might be no local correlation among data dimensions). Therefore, compression of fully-connected layers is critical for reducing the memory and computational cost of neural networks in general.

We can use the characteristics of convolutional layers to reduce the memory and computational cost of fully-connected layers. Convolution is a special form of matrix multiplication, where the weights of the matrix are shared according to the convolution structure (low diversity), and most entries of the weight matrix are zeros (high sparsity). Both of these properties greatly reduce the information capacity of the weight matrix.
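To make this claim concrete, here is a small NumPy sketch (ours, not from the paper) that writes a 1-D convolution as an explicit matrix and measures its sparsity and diversity in the sense used below:

```python
# Sketch (ours): a 1-D convolution written as an explicit matrix, illustrating
# why convolutional weight matrices have low diversity and high sparsity.
import numpy as np

def conv1d_as_matrix(kernel, input_len):
    k, out_len = len(kernel), input_len - len(kernel) + 1
    W = np.zeros((out_len, input_len))
    for i in range(out_len):
        W[i, i:i + k] = kernel          # the same k weights shared across rows
    return W

W = conv1d_as_matrix(np.array([1.0, -2.0, 1.0]), input_len=8)
sparsity = np.mean(W == 0)              # fraction of zero entries
diversity = len(np.unique(W)) / W.size  # distinct values / total entries
print(W.shape, f"sparsity={sparsity:.2f}", f"diversity={diversity:.3f}")
# A dense fully-connected layer of the same shape would have ~0 sparsity
# and diversity close to 1.
```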
Haoran Cai
Department of Statistics, University of Washington
haoran@uw.edu

Jeff Bilmes
Departments of EE and CSE, University of Washington
bilmes@uw.edu

In this paper, we propose a density-diversity penalty regularization, which encourages low diversity and high sparsity in fully-connected layers. The method adds a pairwise L1 penalty over the entries of each weight matrix to the training objective. Moreover, we propose a novel "sorting trick" to efficiently optimize the density-diversity penalty, which would otherwise be very difficult to optimize and would slow down training significantly. When the weights are initialized with a small portion of them set to zero, the density-diversity penalty effectively increases the sparsity of the trained weight matrices in addition to reducing their diversity, generating highly compressible weight matrices.

For this paper, we focus on reducing the number of actual parameters in fully-connected layers by using the density-diversity penalty to encourage sparsity and penalize diversity. Several previous studies have focused on the task of reducing the complexity of deep neural network weights.

Nowlan & Hinton (1992) introduced a regularization method to enforce weight sharing by modeling the distribution of weight values as a Gaussian mixture model. This approach is in the same spirit as our method, because both methods apply regularization during training to reduce the complexity of the weights. However, the Nowlan et al. approach focuses more on the generalization of the network than on compression. Moreover, their approach involves explicitly clustering weights to group them, while in our approach, weights are grouped by the regularizer directly.

The "optimal brain damage" (LeCun et al., 1989) and "optimal brain surgeon" (Hassibi & Stork, 1993) methods are two approaches for pruning weight connections of neural networks based on information about the second-order derivatives. Interestingly, Ciresan et al. (2011) showed that dropping weights randomly can lead to better performance. All three approaches focus more on pruning unnecessary weights of the network (increasing sparsity) than on grouping parameters (decreasing diversity), while our approach addresses both tasks with a single regularizer.

Chen et al. (2015b) proposed a hashing trick to randomly group connection weights into hash buckets, so that the weight entries in the same hash bucket are tied to a single parameter value. Such approaches force low diversity among weight matrix entries, and the grouping of weight entries is determined by the hash functions prior to training. In contrast, our method learns to tie the parameters during training.

Han et al. (2015b) first proposed a method to learn both weights and connections for neural networks by iteratively pruning out low-valued entries in the weight matrices. Based on that idea, they proposed a three-stage pipeline (Han et al., 2015a): pruning low-valued weights, quantizing weights through k-means clustering, and Huffman coding. The resulting networks are highly compressed due to both sparsity (pruning) and low diversity (quantization). The quantization of weights is applied to well-trained models with pruned weights, whereas our method prunes and quantizes weights simultaneously during training. Our approach is most relevant to that of Han et al., and we achieve comparable or better results. In some sense, it may be argued that our approach generalizes that of Han et al., because the hyperparameter controlling the strength of the density-diversity penalty can be adaptively increased during training (although in the present paper, we keep it fixed during training).
Suppose we are given a deep neural network of the following form:

ŷ = W_m φ(W_{m−1} φ(· · · φ(W_1 x))),    (1)

where W_j represents the weight matrix of layer j, φ is a non-linear transfer function, x denotes the input data, and y and ŷ denote the true and estimated labels for x, respectively. Suppose weight matrix W_j has order (r_j, c_j).

On two separate tasks, computer vision and speech recognition, we demonstrate that the proposed density-diversity penalty significantly reduces the diversity and increases the sparsity of the models, while keeping the performance almost unchanged.

Let the objective of the deep neural network be min L(y, ŷ), where L(·) is the loss function. We propose the following optimization to encourage low density and low diversity in the weight matrices:

min L(y, ŷ) + \sum_{j=1}^{m} λ_j ( \sum_{a,a'=1:r_j, b,b'=1:c_j} |W_j(a, b) − W_j(a', b')| + ‖W_j‖_p ),    (2)

where W_j(a, b) denotes the entry of W_j in the ath row and bth column, λ_j > 0 is a hyperparameter, and ‖·‖_p is a matrix p-norm (e.g., p = 2 gives the Frobenius norm, and p = 1 is a sparsity-encouraging norm). We denote the density-diversity penalty for each W_j as DP(W_j).

In general, for weight matrix W_j, the proposed density-diversity penalty resembles the pairwise L1 difference over all entries of W_j. Intuitively, such regularization forces the entries of W_j to collapse into the same values, not just similar values. The regularizer thus reduces the diversity of W_j significantly. The diversity penalty, therefore, is a form of total variation penalty (Rudin et al., 1992), but where the pattern on which total variation is measured is not between neighboring elements (as in computer vision (Chambolle & Lions, 1997)) but rather globally among all pairs of all elements in the matrix.
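As a reference point, the per-layer penalty of Eq. (2) can be computed naively as follows (a sketch of ours; lam plays the role of λ_j). Its O((r_j c_j)²) cost is exactly the problem the sorting trick below addresses:

```python
# Sketch (ours) of the per-layer penalty in Eq. (2), computed naively. The
# full pairwise difference is O((r_j c_j)^2), so this is usable only for
# tiny matrices; it mainly serves as a correctness reference.
import numpy as np

def density_diversity_penalty(W, lam, p=2):
    w = W.ravel()
    pairwise_l1 = np.abs(w[:, None] - w[None, :]).sum()  # all ordered pairs
    p_norm = np.sum(np.abs(w) ** p) ** (1.0 / p)
    return lam * (pairwise_l1 + p_norm)

W = np.array([[0.5, 0.0], [0.5, -0.5]])
print(density_diversity_penalty(W, lam=1e-3))
```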
Though the hyperparameter λ_j can be tuned for each layer, in practice we only tune one λ_j for a single layer j, and for all layers j' ≠ j we set λ_{j'} = (r_j c_j)/(r_{j'} c_{j'}) · λ_j, because the magnitude of the density-diversity penalty is directly correlated with the number of entries in the weight matrix.

While the gradient of the p-norm part of the density-diversity penalty is easy to calculate, computing the gradient of the diversity part of the density-diversity penalty is expensive: naive evaluation costs O(r_j² c_j²) for a weight matrix W_j of order (r_j, c_j). If we suppose r_j = 2000 and c_j = 2000, which is common for modern deep neural networks, then the computational cost of the density-diversity penalty becomes roughly (2000)⁴ = 1.6 × 10¹³ operations, which is intractable even on modern GPUs.

For simplicity, suppose we assign the subgradient of each L1 term at zero to be zero, which is typical in certain widely-used neural network toolkits such as Theano (Theano Development Team, 2016) and mxnet (Chen et al., 2015a). With a sorting trick, shown in Algorithm 1, we can greatly reduce the computational cost of calculating the gradient of the density-diversity penalty from O(r_j² c_j²) down to O(r_j c_j (log r_j + log c_j)).

Algorithm 1: Sorting trick for the density-diversity penalty gradient.
  Input: W_j, λ_j, r_j, c_j
  Output: ∂DP(W_j)/∂W_j
  w̄_j ← flatten(W_j)
  I_j ← sort_index(w̄_j)
  I'_j ← num_greater(I_j)
  ∂DP(W_j)/∂W_j ← reshape(λ_j · (I_j − I'_j), (r_j, c_j))

In the algorithm, flatten(W_j) transforms the matrix W_j, which is of order (r_j, c_j), into a vector of length r_j · c_j, and sort_index() outputs the sorted indices (in ascending order) of the input vector, e.g. sort_index((3, 2, 1, 2, 3, 4)) = (3, 1, 0, 1, 3, 5). In other words, entry i of the sorted indices is the number of entries in the original vector smaller than the ith entry. Also note that entries with the same value have the same sorted index. Correspondingly, num_greater() outputs the number of elements greater than a given entry based on the sorted index. For example, num_greater((3, 1, 0, 1, 3, 5)) = (1, 3, 5, 3, 1, 0). Finally, reshape() transforms the input vector into a matrix of the given shape.

The computational cost of the sorting trick is dominated by the sorting step I_j = sort_index(w̄_j), which is of complexity O(r_j c_j (log r_j + log c_j)). We show that the sorting trick outputs the correct gradient in the following equation:

∂DP(W_j)/∂W_j(a, b) = \sum_{a'=1:r_j, b'=1:c_j} ∂|W_j(a, b) − W_j(a', b')| / ∂W_j(a, b)
 = \sum_{(a',b'): W_j(a,b) > W_j(a',b')} 1 + \sum_{(a',b'): W_j(a,b) < W_j(a',b')} (−1)
 = I_j(a r_j + b) − I'_j(a r_j + b),

where I_j(a r_j + b) corresponds to the (a r_j + b)th entry of I_j, which is the ath row and bth column of the matrix formed by reshaping I_j into (r_j, c_j).

Intuitively, ∂DP(W_j)/∂W_j(a, b) requires counting the number of entries in W_j with values greater than W_j(a, b), and the number of entries less than W_j(a, b). By sorting the entries of W_j, we obtain I_j(a r_j + b) and I'_j(a r_j + b) for all pairs (a, b) collectively; therefore, we can easily calculate the gradient of the density-diversity penalty.

Although the sorting trick is efficient for calculating the gradient of the density-diversity penalty, depending on the size of each weight matrix, the computational cost can still be high. In practice, to further reduce computational cost, for every mini-batch we only apply the density-diversity penalty with a certain small probability (e.g. 1% to 5%). This approach still effectively forces the values of the weight matrices to collapse, while the increase in training time is not significant. In our implementation, to accelerate the collapsing of weight entries, we truncate the weight matrix entries to a limited number of decimal digits (e.g. 6), in which case entries with very small differences (e.g. 1e-6) are considered to have the same value.

3.2 ENCOURAGING SPARSITY

The density-diversity penalty forces entries of a weight matrix to collapse into the same value, yet sparsity is not explicitly enforced. To encourage sparsity in the weight matrices, we randomly initialize every weight matrix with 10% sparsity (i.e., 90% of the weight matrix entries are non-zero). Thereafter, every time we apply the density-diversity penalty, we subsequently set the weight matrix entries equal to the modal value of the matrix to zero. Because the value zero is almost always the most frequent value in the weight matrix (owing to the p-norm), weights are encouraged to stay at zero when using this method, because the density-diversity penalty encourages weights to collapse into the same values. Our sparse initialization approach thus complements any sparsity-encouraging property of the p-norm part of the density-diversity penalty.
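To make the sorting trick of Algorithm 1 concrete, the following NumPy sketch (ours) computes the subgradient of the pairwise-L1 term via a standard sort and checks it against the naive O(n²) computation:

```python
# Sketch (ours) of Algorithm 1: the subgradient of the pairwise-L1 term via
# sorting, verified against the naive quadratic-cost computation.
import numpy as np

def sorting_trick_grad(W, lam):
    w = W.ravel()
    s = np.sort(w)
    num_smaller = np.searchsorted(s, w, side="left")            # I_j
    num_greater = w.size - np.searchsorted(s, w, side="right")  # I'_j
    return (lam * (num_smaller - num_greater)).reshape(W.shape)

def naive_grad(W, lam):
    w = W.ravel()
    g = np.sign(w[:, None] - w[None, :]).sum(axis=1)  # subgradient of |.| at 0 is 0
    return (lam * g).reshape(W.shape)

W = np.random.randn(20, 30)
assert np.allclose(sorting_trick_grad(W, 0.01), naive_grad(W, 0.01))
```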
3.3 COMPRESSION WITH LOW DIVERSITY AND HIGH SPARSITY

Low diversity and high sparsity both can significantly reduce the number of bits required to encode the weight matrices of the trained network. Specifically, for low diversity, considering a weight matrix with d distinct entries, we only need ⌈log₂ d⌉ bits to encode each entry, in contrast to 32-bit floating point values for the original, uncompressed model. High sparsity facilitates encoding the weight matrices in a standard, sparse matrix representation using value and position pairs. Therefore, for a weight matrix in which s entries are not equal to the modal value of the matrix, 2s + min(r_j, c_j) entries are required for encoding, where the min(r_j, c_j) part is for indicating the row or column index, depending on whether compressed sparse row or column form is used. We note that further compression is possible: e.g., by encoding sparsity with values and increments in positions instead of absolute positions, so that the increments are often small values which require fewer bits to encode. Huffman coding can also be applied in the final step, as is done in (Han et al., 2015a), but we do not report this method in our results.

3.4 TYING THE WEIGHTS

Because the density-diversity penalty collapses weight matrix entries into the same values, we tie the weights together so that for every distinct value v of weight matrix W_j, the entries that are equal to v are updated with the average of their gradients.

In practice, we design the training procedure to alternate between learning with the density-diversity penalty and learning with tied weights. During the phase of learning with the density-diversity penalty, we apply the density-diversity penalty with untied weights. This approach greatly reduces the diversity of the weight matrix entries. However, the performance of the resulting model can be inferior to that of the original model, because this approach does not optimize the loss function directly. Therefore, during the phase of learning with tied weights, we train the network without the density-diversity penalty, but we tie the entries of the weight matrices according to the pattern learned in the previous phase. In this way, the network's performance improves while the diversity patterns of the weight matrices are unchanged. We also note that during the phase of learning with tied weights, the sparsity pattern is also fixed, because it was learned in the previous density-diversity pattern learning phase. We show the full procedure of model training with the density-diversity penalty in Figure 1.

Figure 1: Pipeline for compressing networks using the density-diversity penalty. Initialization with low sparsity encourages the weights to collapse, as enforced by the density-diversity penalty. Training with the density-diversity penalty greatly compresses the network by increasing sparsity and decreasing diversity, but the resulting performance can be suboptimal. Training with the tied weights boosts the performance, so that we obtain a highly compressed model with the same performance as the original model.

An alternative approach would be to train the model with tied weights and use the density-diversity penalty simultaneously. However, because the diversity pattern, which controls the tying of the weights, changes rapidly across mini-batches, the weights would need to be re-tied frequently based on the latest diversity pattern. This approach would therefore be computationally expensive. Instead, we choose to alternate between applying the density-diversity penalty and training with tied weights. In practice, we train each phase for 5 to 10 epochs.
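The tied-weight update of Section 3.4 amounts to averaging gradients within groups of equal-valued entries. A minimal NumPy sketch (ours, with illustrative shapes):

```python
# Sketch (ours) of the tied-weight phase: entries of W that share a value are
# updated with the mean of their individual gradients, which preserves the
# diversity pattern learned in the penalty phase.
import numpy as np

def tied_update(W, grad, lr):
    values, inverse = np.unique(W.ravel(), return_inverse=True)
    group_mean = np.bincount(inverse, weights=grad.ravel()) / np.bincount(inverse)
    return W - lr * group_mean[inverse].reshape(W.shape)

W = np.array([[0.5, 0.0, 0.5], [0.0, -0.2, 0.5]])
grad = np.random.randn(*W.shape)
W_new = tied_update(W, grad, lr=0.1)
# Entries that started equal remain equal after the update.
assert len(np.unique(W_new)) <= len(np.unique(W))
```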
4 RESULTS

We apply the density-diversity penalty (with p = 2 for now) to the fully-connected layers of models on both the MNIST (computer vision) and TIMIT (speech recognition) datasets, and obtain significantly sparser and less diverse layer weights. This approach yields dramatic compression of models whose original sizes are already quite conservative, while keeping the performance unchanged. For our implementation, we start with the mxnet (Chen et al., 2015a) package, which we modified by changing the weight-updating code to include our density-diversity penalty.

We note that the three key components of our algorithm, namely the sparse initialization, the regularization, and the weight tying, contribute to a highly compressed network only when they are applied jointly. Independently applying any one of the key components would result in inferior results (we started developing our approach without the sparse initialization and weight tying and got worse compression results), as a good compression requires low diversity, which is achieved by applying the density-diversity penalty; high sparsity, which is achieved by applying both the density-diversity penalty and the sparse initialization; and no loss of performance, which is achieved by training with weight tying.

To evaluate the effectiveness of the density-diversity penalty, we report the diversity and sparsity of the trained weight matrices. We define "diversity" to be the number of distinct values divided by the total number of entries in the weight matrix, "sparsity" to be the number of entries with the modal value divided by the total number of entries, and "density" to be 1 − sparsity. Based on the observed diversity and sparsity, we can estimate the compression rate of the model. For weight matrix W_j, suppose b_v and b_p bits are required to encode the value and position, respectively, in the sparse matrix representation. Thus, we have

compression rate = (r_j c_j p) / (s (b_v + b_p) + min(r_j, c_j) b_p),

where p is the number of bits required to encode the weight matrix entries in the original model; we choose p = 32, as used in most modern neural networks.

Table 1: Compression for LeNet-300-100 on MNIST, comparing the model trained with the density-diversity penalty (DP) and the "deep compression" method (DC).

Layer | # Weights | DP Density | DC Density | DP Diversity | DC Diversity
fc1 | 235K | 0.025 | 0.08 | 0.0031 | 0.0003
fc2 | 30K | 0.11 | 0.09 | 0.017 | 0.0021
fc3 | 1K | 0.8 | 0.26 | 0.77 | 0.064
Overall | 266K | 0.037 | 0.08 | 0.018 | 0.0007

4.1 MNIST DATASET

The MNIST dataset consists of hand-written digits, containing 60,000 training data points and 10,000 test data points. We further sequester 10,000 data points from the training data to be used as the validation set for parameter tuning. Each data point has 28 × 28 = 784 dimensions, and there are 10 classes of labels.

We choose LeNet (LeCun et al., 1998) as the model to compress, because LeNet performs well on MNIST while having a restricted size, which makes compression hard. Specifically, we test on LeNet-300-100, which consists of two hidden layers with 300 and 100 hidden units respectively, as well as LeNet-5, which contains two convolutional layers and two fully-connected layers. Note that for LeNet-5, we only apply the density-diversity penalty on the fully-connected layers. For optimization, we use SGD with momentum.

We report the diversity and sparsity of each layer of the LeNet-300-100 model trained with the density-diversity penalty in Table 1. The overall compression rate for the LeNet-300-100 model is 32.43X, using 10 bits to encode both the value and the index of the sparse matrix representation (the number of bits is based on the number of distinct values in the trained weight matrices). The error rate of the compressed model is 1.62%, while the error rate of the original model is 1.64%; thus, we obtain a highly compressed model without loss of performance. Compared to the state-of-the-art "deep compression" result (Han et al., 2015b), which achieves roughly a 32 times compression rate (without applying Huffman coding at the end), our method overall achieves a better compression rate. We also note that "deep compression" uses a more complex sparse matrix representation, so that indices of values are encoded with many fewer bits.
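The compression-rate accounting above can be turned into a small helper. The following sketch (ours) uses the formula as reconstructed above, with illustrative layer sizes and bit widths rather than the paper's exact accounting:

```python
# Sketch (ours) of the compression-rate estimate from Section 4. The inputs
# below (layer order, density, bit widths) are illustrative only.
def compression_rate(r, c, density, b_v, b_p, p=32):
    s = int(density * r * c)                 # entries not equal to the modal value
    compressed_bits = s * (b_v + b_p) + min(r, c) * b_p
    return (r * c * p) / compressed_bits

# e.g., a fully-connected layer of order (300, 784) at 2.5% density,
# with 10-bit values and 10-bit positions:
print(f"{compression_rate(300, 784, 0.025, b_v=10, b_p=10):.1f}x")
```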
In Table 2 we show the per-layer diversity and sparsity of the LeNet-5 convolutional model trained with the density-diversity penalty. For this model, the overall compression rate is 15.78X. When considering only the fully-connected layers of the model, where most parameters reside and where the density-diversity penalty applies, the compression rate is 226.32X, using 9 bits for both the value and the index of the sparse matrix representation. The error rate of the compressed model is 0.93%, which is comparable to the 0.88% error rate of the original model. Compared to the "deep compression" method (without Huffman coding), which gives 33 times compression for the entire model and 36 times compression for the fully-connected layers only, our approach achieves better results on the fully-connected layers.

Table 2: Compression for LeNet-5 on MNIST, comparing the model trained with the density-diversity penalty (DP) and the "deep compression" method (DC). The Overall FC row reports the overall statistics for the fully-connected layers only, where the density-diversity penalty is applied.

Layer | # Weights | DP Density | DC Density | DP Diversity | DC Diversity
conv1 | 0.5K | 1.0 | 0.66 | 1.0 | 0.51
conv2 | 25K | 1.0 | 0.12 | 1.0 | 0.010
fc1 | 400K | 0.0034 | 0.08 | 0.0017 | 0.0002
fc2 | 5K | 0.048 | 0.19 | 0.042 | 0.013
Overall FC | 405K | 0.0039 | 0.08 | 0.0022 | 0.0003
Overall | 431K | 0.063 | 0.08 | 0.061 | 0.0014
4.2 TIMIT DATASET

The TIMIT dataset is for a speech recognition task. The dataset consists of a 462-speaker training set, a 50-speaker validation set, and a 24-speaker test set. Fifteen frames are grouped together as inputs, where each frame contains 40 log mel filterbank coefficients plus energy, along with their first and second temporal derivatives (Mohamed et al., 2012). Overall, there are 1.1M training samples, 120k validation samples, and 50k test samples. With the 15-frame window, each data point has 1845 dimensions. Each dimension is normalized by subtracting the mean and dividing by the standard deviation. The label vector has 183 dimensions, consisting of three states for each of the 61 phonemes. For decoding, we use a bigram language model (Mohamed et al., 2012), and the 61 phonemes are mapped to 39 classes as done in (Lee & Hon, 1989), as is quite standard.

We choose the model used in (Mohamed et al., 2012) and (Ba & Caruana, 2014) as the target for compression. In particular, the model contains three hidden fully-connected layers, each of which has 2048 hidden units. We choose ReLU as the activation function and AdaGrad (Duchi et al., 2011) for optimization, which performs best on the original models without the density-diversity penalty.

Table 3: Compression statistics for the 3-2048 fully-connected network on the TIMIT dataset, comparing the model trained with the density-diversity penalty (DP) and the "deep compression" method (DC).

Layer | # Weights | DP Density | DC Density | DP Diversity | DC Diversity
fc1 | 3778K | 0.037 | 0.12 | 0.0004 | 1.6e-5
fc2 | 4194K | 0.064 | 0.13 | 0.0003 | 1.5e-5
fc3 | 4194K | 0.080 | 0.14 | 0.0004 | 1.5e-5
fc4 | 251K | 0.35 | 0.25 | 0.012 | 0.0002
Overall | 12417K | 0.0947 | 0.1936 | 0.0007 | 1.9e-5

Table 3 shows the per-layer diversity and sparsity for both the density-diversity penalty and "deep compression" applied to the 3-2048 fully-connected model on the TIMIT dataset. We train the original, uncompressed model and observe a 23.30% phone error rate on the core test set. With our best effort at tuning parameters for the "deep compression" method, we get a 23.35% phone error rate and a compression rate of 19.47X, using 64 cluster centers for the k-means quantization step. With our density-diversity penalty regularization, we get 21.45X compression and a 23.25% phone error rate, with 11 bits for value encoding and 11 bits for position encoding in the sparse matrix representation.

5 DISCUSSION

On both the MNIST and TIMIT datasets, compared to the "deep compression" method (Han et al., 2015b), the density-diversity penalty achieves comparable or even better compression rates on fully-connected layers. Another advantage offered by the density-diversity penalty approach is that, rather than learning the sparsity pattern and the diversity pattern separately as done in the "deep compression" method, the density-diversity penalty enforces high sparsity and low diversity simultaneously, which greatly reduces the effort involved in tuning parameters.

We visualize a part of the weight matrix either trained with or without the density-diversity penalty in Figure 2. We clearly observe that the weight matrix trained with the density-diversity penalty has significantly more commonality among entry values. In addition, the histogram of the entry values comparing the two weight matrices (Figure 3) shows that the weight matrix trained with the density-diversity penalty has much less variance in the entry values. Both figures show that the density-diversity penalty makes the weight matrix extremely compressible.

Figure 2: Visualization of the first 500 rows and 500 columns of the first-layer weight matrix (shape 2048 × 1845) of the TIMIT 3-2048 model, comparing training either with (left) or without (right) the density-diversity penalty.

Figure 3: Histogram of the entries of the first-layer weight matrix (shape 2048 × 1845) of the TIMIT 3-2048 model with zero entries removed, comparing training either with (left) or without (right) the density-diversity penalty.
Specifically, comparing the diversity and sparsity of the trained matrices under the two compression methods, we find that the density-diversity penalty achieves higher sparsity but more diversity than the "deep compression" method. The "deep compression" method has two separate phases: in the first phase, the sparsity pattern is learned by pruning away low-valued entries, and in the second phase, k-means clustering is applied to quantize the matrix entries into a chosen number of clusters (e.g., 64), thus generating weight matrices with very low diversity. In contrast, the density-diversity penalty acts as a regularizer, enforcing low diversity and high sparsity simultaneously and during training, so that the diversity and sparsity of the trained matrices are more balanced.

6 CONCLUSION

In this work, we introduce the density-diversity penalty as a regularizer on the fully-connected layers of deep neural networks to encourage a high-sparsity, low-diversity pattern in the trained weight matrices. To efficiently optimize the density-diversity penalty, we propose a "sorting trick" that makes the density-diversity penalty computationally feasible. On the MNIST and TIMIT datasets, networks trained with the density-diversity penalty achieve 20X to 200X compression rates on fully-connected layers, while keeping the performance comparable to that of the original model.

In future work, we plan to apply the density-diversity penalty to recurrent models, extend the density-diversity penalty to convolutional layers, and test other values of p. Moreover, besides the pairwise L1 loss for the diversity portion of the density-diversity penalty, we will investigate other forms of regularization to reduce the diversity of the trained weight matrices (e.g., other forms of structured convex norms). Throughout this work, we have focused on the compression task, but the learned sparsity/diversity pattern of the trained weight matrices is also worth exploring further. For image and speech data, we know that we can use the convolutional structure to improve performance, whereas for other, very different forms of data, where we have no prior knowledge about the structure (i.e., patterns of locality), the density-diversity penalty may be applied to discover the underlying hidden pattern of the data and to achieve improved results.

REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Antonin Chambolle and Pierre-Louis Lions. Image recovery via total variation minimization and related problems. Numerische Mathematik, 76(2):167-188, 1997.

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274, 2015a.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. CoRR, abs/1510.00149, 2, 2015a.

Babak Hassibi and David G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. Morgan Kaufmann, 1993.
Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012.

Yann LeCun, John S. Denker, Sara A. Solla, Richard E. Howard, and Lawrence D. Jackel. Optimal brain damage. In NIPS, volume 2, pp. 598-605, 1989.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

K.-F. Lee and H.-W. Hon. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(11):1641-1648, 1989.

Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. CoRR, abs/1504.04788, 2015b.

Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems, pp. 1135-1143, 2015b.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014b.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
B1M8JF9xx | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "In recent years, deep generative models have dramatically pushed forward the state-of-the-art in generative modelling by generating convincing samples of images (Radford et al.]2016), achieving. state-of-the-art semi-supervised learning results (Salimans et al.. 2016), and enabling automatic image manipulation (Zhu et al.]2016). Many of the most successful approaches are defined in. terms of a process which samples latent variables from a simple fixed distribution (such as Gaussian or uniform) and then applies a learned deterministic mapping which we will refer to as a decoder network. Important examples include variational autoencoders (VAEs) (Kingma & Welling2014 Rezende et al.J2014), generative adversarial networks (GANs) (Goodfellow et al.2014), generative moment matching networks (GMMNs) (Li & Swersky2015) Dziugaite et al.2015), and nonlinear independent components estimation (Dinh et al.f 2014). We refer to this set of models collectively as decoder-based models, also known as density networks (MacKay & Gibbsl1998).\nWhile many decoder-based models are able to produce convincing samples (Denton et al. 2015 Radford et al.|2016), rigorous evaluation remains a challenge. Comparing models by inspecting samples is labor-intensive, and potentially misleading (Theis et al.[2016). While alternative quantita tive criteria have been proposed (Bounliphone et al.]2016f Im et al.[|2016] Salimans et al.2016) log-likelihood of held-out test data remains one of the most important measures of a generative model's performance. Unfortunately, unless the decoder is designed to be reversible (Dinh et al. 2014]2016), log-likelihood estimation in decoder-based models is typically intractable. In the case of VAE-based models, a learned encoder network gives a tractable lower bound, but for GANs and GMMNs it is not obvious how even to compute a good lower bound. Even when lower bounds are available. their accuracy may be hard to determine. Because of the difficulty of log-likelihood"}, {"section_index": "1", "section_name": "ON THE E OUANTITATIVE ANALYSIS OF DECODER BASED GENERATIVE MODELS", "section_text": "Ruslan Salakhutdinoy\nRuslan Salakhutdinoy School of Computer Science Carnegie Mellon University\nyburda@openai.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "3 9 0 0 7 t 4 3. 9 0 1 q 8 / 6 9 0 1 0 6 % 4 / 5 6 0 - 1 5 / 1 3 t 7 3 0 3 8 1 7 0 9 3 7 8 8 / / 7 0 9 9 1 S . 8 8 0 7 3 ) 8 6 1 3 4 7 6 0 q 6 3 9 0 8 4 5 U t. 4 s 9 8 a 9 - 0 0 1 ! 7 3 3 5 5 6 1 6 6 0 8 8 - 8 2 8 3 7 N 0 8 1 0 7 5 4 9 / 1 9 8 8 6 9 1 7 7 1 7 q 0 3 0 1 3 0 1 0 0 / 9 1 6 4 8 / 4 1 9 1 4 1 9 6 7 6 1 c 1 6 4 ! 0 9 1 6 1 / 0 1 5 6 0 1 9 7 1 4 3 7 b / 9 1 8 0 1 / / 8 3 1 9 6 1 3 3 4. 9 3 3 4 / 5 0 9 4 5 3 8 9 / 0 y 9 4\nFigure 1: (a) samples from a GAN with 10 latent dimensions, (b) and (c) samples from a GAN with 50 latent. dimensions at different epochs of training. While it is difficult to visually discern differences between these three models, their log-likelihood (LLD) values span almost 300 nats..\nevaluation, it is hard to answer basic questions such as whether the networks are simply memorizing. training examples, or whether they are missing important modes of the data distribution.\nIn this paper, we propose to use annealed importance sampling (AIS; (Neal]2001) to estimat. log-likelihoods of decoder-based generative models and to obtain approximate posterior sample. 
Importantly, we validate this approach using Bidirectional Monte Carlo (BDMC) (Grosse et al. 2015), which provably bounds the log-likelihood estimation error and the KL divergence from th. true posterior distribution for data simulated from a model. For most models we consider, we fin that AIS is two orders of magnitude more accurate than KDE, and is accurate enough to perforn fine-grained comparisons between generative models. In the case of VAEs, we show that AIS can b. further sped up by using the recognition network to determine the initial distribution; this yields al. estimator which is fast enough to be run repeatedly during training..\nUsing the proposed method, we analyze several scientific questions central to understanding decoder. based generative models. First, we measure the accuracy of KDE and of the importance weighting bound which is commonly used to evaluate VAEs. We find that the KDE error is larger than the (quite significant) log-likelihood differences between different models, and that KDE can lead to misleading conclusions. The importance weighted bound, while reasonably accurate, can also yield misleading results in some cases.\nSecond, we compare the log-likelihoods of VAEs, GANs, and GMMNs, and find that VAEs achieve log-likelihoods several hundred nats higher than the other models (even though KDE considers all three models to have roughly the same log-likelihood). Third, we analyze the degree of overfitting in. VAEs, GANs, and GMMNs. Contrary to a commonly proposed hypothesis, we find that GANs and. GMMNs are not simply memorizing their training data; in fact, their log-likelihood gaps between. training and test data are much smaller relative to comparably-sized VAEs. Finally, by visualizing. (approximate) posterior samples obtained from AIS, we observe that GANs miss important modes of the data distribution, even ones which are represented in the training data..\nWe emphasize that none of the above phenomena can be measured using KDE or the importance weighted bound, or by inspecting samples. (See Fig.|1|for an example where it is tricky to compare. models based on samples.) While log-likelihood is by no means a perfect measure, we find that the ability to accurately estimate log-likelihoods of decoder-based models yields crucial insight into thei behavior and suggests directions for improving them..\n3 1 0 0 7 4 9 3. 9 0 0 q 8 0 / / 6 9 0 - 0 6 8 4 / 5 6 1 5 - 1 3 3 0 3 8 7 7 0 9 8 3 ~ 8 8 / 7 0 9 1 3 1 8 8 0 7 8 9 1 3 4 7 6 6 6 1 3 1 9 1 0 4 + 3 5 4 s 8 1 9 0 ? 3 6 3 9 5 5 8 6 6 0 8 $ 8 1 1 / 8 2 8 3 7 7 2 0 6 8 1 0 7 4 9 / 3 ! q 9 8 6 9 C / 7 7 q 0 3 1 3 0 1 9 0 / / 0 9 1 4 4 3 / 0 9 1 4 1 b ! 9 ~ 0 1 c 6 4 1 0 6 6 / 0 1 5 0 7 4 3 7 / 8 9 1. 8 0 8 3 9 3 3 ) O 9 3 3 4 / 5 0 9 4 5 3 8 / 0 3 y 9 4 (a) GAN-10; LLD: 328.7 (b) GAN-50, epoch 200; LLD: 543.5 (c) GAN-50, epoch 1000; LLD: 625.5\nThe most widely used estimator of log-likelihood for GANs and GMMNs is the Kernel Density Estimator (KDE) (Parzen|1962). It estimates the likelihood under an approximation to the model's distribution obtained by simulating from the model and convolving the set of samples with a kernel (typically Gaussian). Unfortunately, KDE is notoriously inaccurate for estimating likelihood in high dimensions, because it is hard to tile a high-dimensional manifold with spherical Gaussians (Theis et al.2016).\nIn generative modelling, a decoder network is often used to define a generative distribution by transforming samples from some simple distribution (e.g. normal) to the data manifold. 
In this section, we briefly review the decoder-based models considered in this work.

2.1.1 VARIATIONAL AUTOENCODER (VAE)

A variational autoencoder (VAE) (Kingma & Welling, 2014) is a probabilistic directed graphical model. It is defined by a joint distribution over a set of latent random variables z and the observed variables x: p(x, z) = p(x|z)p(z). The prior over the latent random variables, p(z), is usually chosen to be a standard Gaussian distribution. The data likelihood p(x|z) is usually a Gaussian or Bernoulli distribution whose parameters depend on z through a deep neural network, known as the decoder network. The VAE also uses an approximate inference model, called an encoder or recognition network, that serves as a variational approximation q(z|x) to the posterior p(z|x). The decoder and encoder networks are jointly trained to maximize the evidence lower bound (ELBO):

log p(x) ≥ E_{q(z|x)}[log p(x|z)] − KL(q(z|x) ‖ p(z)).    (1)

2.1.2 GENERATIVE ADVERSARIAL NETWORK (GAN)

A generative adversarial network (GAN) (Goodfellow et al., 2014) is a generative model trained by a game between a decoder network and a discriminator network. It defines the generative model by sampling the latent variable z from some simple prior distribution p(z) (e.g., Gaussian) and passing it through the decoder network. The discriminator network D(·) outputs the probability of a given sample coming from the data distribution. Its task is to distinguish samples from the generator distribution from real data. The decoder network, on the other hand, tries to produce samples as realistic as possible, in order to fool the discriminator into accepting its outputs as real. The competition between the two networks results in the following minimax problem:

min_G max_D E_{x∼p_data}[log D(x)] + E_{z∼p(z)}[log(1 − D(G(z)))].    (2)

Unlike the VAE, this objective is not explicitly related to the log-likelihood of the data. Moreover, the generative distribution is a deterministic mapping, i.e., p(x|z) is a Dirac delta distribution parametrized by the deterministic decoder. This can make the data likelihood ill-defined, as the probability density at any particular point x can be either infinite or exactly zero.
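For concreteness, here is a single-sample Monte Carlo estimate of the ELBO in Eq. (1) for a toy linear-Gaussian decoder (a sketch of ours; the model and shapes are illustrative, not the networks evaluated in this paper):

```python
# Sketch (ours): one-sample Monte Carlo estimate of the ELBO for a toy model
# with p(z) = N(0, I), p(x|z) = N(W z, I), and a Gaussian q(z|x).
import numpy as np

def elbo(x, W, mu_q, log_sigma_q, rng):
    z = mu_q + np.exp(log_sigma_q) * rng.standard_normal(mu_q.shape)
    log_px_z = -0.5 * np.sum((x - W @ z) ** 2 + np.log(2 * np.pi))
    # Analytic KL between N(mu_q, sigma_q^2) and the standard normal prior:
    kl = 0.5 * np.sum(np.exp(2 * log_sigma_q) + mu_q**2 - 1 - 2 * log_sigma_q)
    return log_px_z - kl

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 2))
x = rng.standard_normal(5)
print(elbo(x, W, mu_q=np.zeros(2), log_sigma_q=np.zeros(2), rng=rng))
```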
2.1.3 GENERATIVE MOMENT MATCHING NETWORK (GMMN)

Generative moment matching networks (GMMNs) (Li & Swersky, 2015; Dziugaite et al., 2015) adopt maximum mean discrepancy (MMD) as the training objective, a moment matching criterion where kernel mean embedding techniques are used to avoid unnecessary assumptions about the distributions. A GMMN has the same issue as a GAN in that the log-likelihood is undefined.

We are interested in estimating the probability p(x) = ∫ p(z) p(x|z) dz that a model assigns to an observation x. This is equivalent to computing the normalizing constant of the unnormalized distribution f(z) = p(z, x). One naive approach is likelihood weighting, where one samples {z^(k)}_{k=1}^{K} ∼ p(z) and averages the conditional likelihoods p(x|z^(k)). This is justified by the following identity:

p(x) = ∫ p(x|z) p(z) dz = E_{z∼p(z)}[p(x|z)].    (3)

Likelihood weighting can be viewed as simple importance sampling, where the proposal distribution is the prior p(z) and the target distribution is the posterior p(z|x). Unfortunately, importance sampling works well only when the proposal distribution is a good match for the target distribution. For the models considered in this paper, the (very broad) prior can be drastically different from the (highly concentrated) posterior, leading to inaccurate estimates of the likelihood.

Annealed importance sampling (AIS; Neal, 2001) is a Monte Carlo algorithm commonly used to estimate (ratios of) normalizing constants. Roughly speaking, it computes a sequence of importance sampling estimates, each of which is stable because it involves two distributions which are very similar. In particular, suppose one is interested in estimating the normalizing constant Z = ∫ f(z) dz of an unnormalized distribution f(z). (In the likelihood estimation setting, f(z) = p(z, x) and Z = p(x).) One must specify a sequence of distributions q_1, ..., q_T, where q_t = f_t/Z_t, and f_T = f is the target distribution. It is required that one can obtain one or more exact samples from the initial distribution q_1. One must also specify a sequence of reversible MCMC transition operators T_1, ..., T_T, where T_t leaves q_t invariant.

AIS produces a (nonnegative) unbiased estimate of the ratio Z_T/Z_1 as follows: first, we sample a random initial state z_1 ∼ q_1 and set the initial weight w_1 = 1. For every stage t ≥ 2, we update the weight w and sample the state z_t according to

w_t ← w_{t−1} · f_t(z_{t−1}) / f_{t−1}(z_{t−1}),    z_t ∼ T_t(z | z_{t−1}).    (4)

As demonstrated by Neal (2001), this procedure produces a nonnegative weight w_T such that E[w_T] = Z_T/Z_1. Typically, Z_1 is known, so one computes multiple independent AIS weights {w_T^(k)}_{k=1}^{K} and averages them. In our setting, Z_1 = 1 and Z_T = p(x), so we denote the resulting estimator as p̂(x).

Typically, the unnormalized intermediate distributions are simply defined to be geometric averages f_t(z) = f_1(z)^{1−β_t} f_T(z)^{β_t}, where the β_t are monotonically increasing parameters with β_1 = 0 and β_T = 1. For f_1(z) = p(z) and f_T(z) = p(z, x), this gives

f_t(z) = p(z) p(x|z)^{β_t}.    (5)

As shown by Neal (2001), under certain regularity conditions, the variance of the AIS estimate tends to zero as the number of intermediate distributions is increased. AIS is very effective in practice, and has been used to estimate normalizing constants of complex high-dimensional distributions (Salakhutdinov & Murray, 2008).

AIS provides a nonnegative unbiased estimate p̂(x) of p(x). However, it is often more practical to estimate p(x) in log space, i.e. log p̂(x), because of the underflow problems that arise when dealing with products of many probabilities. In general, we note that the logarithm of a nonnegative unbiased estimate is a stochastic lower bound of the log estimand (Grosse et al., 2015). In particular, log p̂(x) is a stochastic lower bound on log p(x), satisfying E[log p̂(x)] ≤ log p(x) and Pr(log p̂(x) > log p(x) + b) < e^{−b}.

Grosse et al. (2015) pointed out that if AIS is run in reverse starting from an exact posterior sample, it yields an unbiased estimate of 1/p(x), which (by the above argument) can be seen as a stochastic upper bound on log p(x). The combination of lower and upper bounds from forward and reverse AIS is known as bidirectional Monte Carlo (BDMC). In many cases, the combination of bounds can pinpoint the true value quite precisely. While posterior sampling is just as hard as log-likelihood estimation (Jerrum et al., 1986), in the case of log-likelihood estimation for simulated data, one has available a single exact posterior sample: the parameters and/or latent variables which generated the data. Because this trick is only applicable to simulated data, BDMC is most useful for measuring the accuracy of a log-likelihood estimator on simulated data.

Grosse et al. (2016) observed that BDMC can also be used to validate posterior inference algorithms, as the gap between the upper and lower bounds is itself a bound on the KL divergence of approximate samples from the true posterior distribution.
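The forward direction of this procedure is compact enough to sketch in full. The following toy NumPy implementation (ours) anneals from a standard normal f_1 to f_T(z) = f_1(z) exp(−5(z−2)²) using the geometric path of Eq. (5), with random-walk Metropolis transitions in place of HMC:

```python
# Sketch (ours) of forward AIS on a 1-D toy problem where Z_1 = 1 and the
# true log Z_T can be computed analytically (a product of two Gaussians).
import numpy as np

rng = np.random.default_rng(0)
log_f1 = lambda z: -0.5 * z**2 - 0.5 * np.log(2 * np.pi)  # N(0, 1) density
log_phi = lambda z: -5.0 * (z - 2.0) ** 2                 # "likelihood" term
betas = np.linspace(0.0, 1.0, 1000)

def ais_chain():
    z, log_w = rng.standard_normal(), 0.0                 # exact sample from q_1
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += (b - b_prev) * log_phi(z)                # weight update of Eq. (4)
        for _ in range(2):                                # Metropolis moves at beta = b
            z_prop = z + 0.5 * rng.standard_normal()
            log_a = (log_f1(z_prop) + b * log_phi(z_prop)
                     - log_f1(z) - b * log_phi(z))
            if np.log(rng.random()) < log_a:
                z = z_prop
    return log_w

log_ws = np.array([ais_chain() for _ in range(16)])
print("AIS log Z estimate:", np.logaddexp.reduce(log_ws) - np.log(16))
```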
3 METHODOLOGY

For a given generative distribution p(x, z) = p(z) p(x|z), our task is to measure the log-likelihood of test examples, log p(x_test). We first discuss how we define the generative distribution for decoder-based networks. For a VAE, the generative distribution is defined in the standard way, where p(z) is a standard normal distribution and p(x|z) is a normal distribution parametrized by a mean μ(z) and standard deviation σ(z) predicted by the generator given the latent code. However, the observation distribution for GANs and GMMNs is typically taken to be a delta function, so that the model's distribution covers only a submanifold of the space of observables. In order for the likelihood to be well-defined, we follow the same assumption made when evaluating with the kernel density estimator (Parzen, 1962): we assume a Gaussian observation model with a fixed variance hyperparameter σ². We will refer to the distribution defined by this Gaussian observation model as p_σ.

Observe that the KDE estimate is given by

p̂_σ(x) = (1/K) Σ_{k=1}^{K} p_σ(x | z^(k)),  with z^(k) ∼ p(z),    (6)

which is likelihood weighting applied to the distribution p_σ, an instance of simple importance sampling (SIS). Because SIS is an unbiased estimator of the likelihood, log p̂_σ(x) is a stochastic lower bound on log p_σ(x) (Grosse et al., 2015). Unfortunately, SIS can result in very poor estimates when the evidence has low prior probability (i.e. the posterior is very dissimilar to the prior). This suggests that AIS might be able to yield much more accurate log-likelihood estimates under p_σ. We note that KDE can be viewed as a special case of AIS where the number of intermediate distributions is set to 0.

We now describe specifically how we carry out evaluation using AIS. In most of our experiments, we choose the initial distribution of AIS to be p(z), the same prior distribution used in training decoder-based models. If the model provides an encoder network (e.g., a VAE), we can instead take the approximate posterior predicted by the encoder, q(z|x), as the initial distribution of the AIS chain. For continuous data, we define the unnormalized density of the target distribution to be the joint generative distribution with the Gaussian noise model, p_σ(x, z) = p_σ(x|z) p(z). For the small subset of experiments done on binary data, we define the observation model to be a Bernoulli model with mean predicted by the decoder. Our intermediate distributions are geometric averages of the prior and posterior, as in Eqn. 5. Since all of our experiments use a continuous latent space, we use Hamiltonian Monte Carlo (Neal, 2010) as the transition operator for sampling the latent variables along the annealing schedule. Our evaluation code is available online.
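As a baseline for the experiments below, the KDE/SIS estimate of Eq. (6) is only a few lines of code. In this sketch (ours), the decoder is a stand-in linear map; only the estimator itself is the point:

```python
# Sketch (ours) of Eq. (6): the KDE estimate is likelihood weighting under the
# Gaussian observation model p_sigma, using decoder samples z^(k) ~ p(z).
import numpy as np

def kde_log_likelihood(x, decoder, sigma, K, dim_z, rng):
    z = rng.standard_normal((K, dim_z))                   # z^(k) ~ p(z)
    mu = decoder(z)                                       # K x dim_x decoder means
    log_p = (-0.5 * np.sum((x - mu) ** 2, axis=1) / sigma**2
             - 0.5 * x.size * np.log(2 * np.pi * sigma**2))
    return np.logaddexp.reduce(log_p) - np.log(K)         # log (1/K) sum_k p_sigma(x|z_k)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 784)) * 0.1                   # a linear "decoder" stand-in
x = rng.standard_normal(784)
print(kde_log_likelihood(x, lambda z: z @ A, sigma=0.1, K=10000, dim_z=3, rng=rng))
```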
4 RELATED WORK

AIS is known to be a powerful technique for estimating the partition function of a model. One influential example was the use of AIS to evaluate deep belief networks (Salakhutdinov & Murray, 2008). Although we use the same technique, the problem we consider is quite different. First, the models they consider are undirected graphical models, whereas decoder-based models are directed graphical models. Second, their model has a well-defined probability density function in terms of an energy function, whereas we need to define an appropriate probabilistic model for networks whose likelihood is otherwise ill-defined. In addition, we validate our estimates using BDMC.

Theis et al. (2016) give an in-depth analysis of issues that might come up in evaluating generative models. They also point out that a model that completely fails at modelling the proportions of the modes of the distribution might still achieve a high likelihood score. Salimans et al. (2016) propose an image-quality measure which they find to be highly correlated with human visual judgement. They propose to feed the samples x of the model to the "Inception" model to obtain a conditional label distribution p(y|x), and to evaluate the score exp(E_x[KL(p(y|x) ‖ p(y))]), which is motivated by p(y|x) having low entropy while p(y) has large entropy. However, this measure is largely based on the visual quality of the samples, and we argue that visual quality can be a misleading way to evaluate a model.

5.1 DATASETS

All of our experiments were performed on the MNIST dataset of images of handwritten digits (LeCun et al., 1998). For consistency with prior work on evaluating decoder-based models, most of our experiments used continuous inputs. We dequantized the data following Uria et al. (2013), by adding uniform noise of 1/256 to the data and rescaling it to be in [0, 1]^D after dequantization. We use the standard split of MNIST into 60,000 training and 10,000 test examples, and used 50,000 images from the training set for training and the remaining 10,000 images for validation.

5.2 MODELS

For most of our experiments, we considered two decoder architectures: a small one with 10 latent dimensions, and a larger one with 50 latent dimensions. We use a standard normal distribution as the prior for training all of our models. All layers were fully connected, and the number of units in each layer was 10-64-256-256-1024-784 for the smaller architecture and 50-1024-1024-1024-784 for the larger one. We trained both architectures using the VAE, GAN, and GMMN objectives, resulting in six networks which we refer to as VAE-10, VAE-50, etc. In general, the larger architecture performed substantially better on both the training and test sets, but we analyze the smaller architecture as well because it better highlights some of the differences between the training criteria. Additional architectural details are given in Appendix A.1.

In order to enable a direct comparison between training criteria, all models used a spherical Gaussian observation model with fixed variance. This is consistent with previous protocols for evaluating GANs and GMMNs. However, we note that this observation model is a nontrivial constraint on the VAEs, which could instead be trained with a more flexible diagonal Gaussian observation model where the variances depend on the latent state. Such observation models can easily achieve much higher log-likelihood scores, for instance by noticing that boundary pixels are always close to 0. (E.g., we trained a VAE with the more general observation model which achieved a log-likelihood of at least 2200 nats on continuous MNIST.) Therefore, the log-likelihood values we report should not be compared directly against networks which have a more flexible observation model.

5.3 VALIDATION OF LOG-LIKELIHOOD ESTIMATES

Before we analyze the performance of the trained networks, we must first determine the accuracy of the log-likelihood estimators. In this section, we validate the accuracy of our AIS-based estimates using BDMC. We then analyze the error in the KDE and IWAE estimates and highlight some cases where these measures miss important phenomena.
5.3.1 VALIDATION OF AIS

We used AIS to estimate log-likelihoods for all models under consideration. Except where otherwise specified, all AIS estimates were obtained using 16 independent chains, 10,000 intermediate distributions of the form in Eqn. 5, and a transition operator consisting of one proposed HMC trajectory with 10 leapfrog steps. Following Ranzato et al. (2010), the HMC step size was tuned to achieve an acceptance rate of 0.65 (as recommended by Neal (2010)).

For all six models, we evaluated the accuracy of this estimation procedure using BDMC on 1000 simulated examples sampled from each model's distribution. The gap between the log-likelihood estimates produced by forward AIS (which gives a lower bound) and reverse AIS (which gives an upper bound) bounds the error of the AIS estimates on simulated data. We refer to this gap as the BDMC gap. For five of the six networks under consideration, we found the BDMC gap to be less than 1 nat. For the remaining model (GAN-50), the gap was about 10 nats. Both gaps are much smaller than our measured log-likelihood differences between models. If these gaps are representative of the true error in the estimates on the real data, then this indicates AIS is accurate enough to make fine-grained comparisons between models and to benchmark other log-likelihood estimators. (The BDMC gap is not guaranteed to hold for the real data, although Grosse et al. (2016) found the behavior of AIS to match closely between real and simulated data.)

5.3.2 HOW ACCURATE IS KERNEL DENSITY ESTIMATION?

Kernel density estimation (KDE) (Parzen, 1962) is widely used to evaluate decoder-based models (Goodfellow et al., 2014; Li & Swersky, 2015), and a variant was proposed in the setting of evaluating Boltzmann machines (Bengio et al., 2013). Papers reporting KDE estimates often caution that KDE is not meant to be applied in high-dimensional spaces and that the results might therefore be inaccurate. Nevertheless, KDE remains the standard protocol for evaluating decoder-based models. We analyzed the accuracy of the KDE estimates by comparing against AIS. Both estimates are stochastic lower bounds on the true log-likelihood (see Section 3), so larger values are guaranteed (with high probability) to be more accurate.
"}, {"section_index": "10", "section_name": "5.3.2 HOW ACCURATE IS KERNEL DENSITY ESTIMATION?", "section_text": "Kernel density estimation (KDE) (Parzen, 1962) is widely used to evaluate decoder-based models (Goodfellow et al., 2014; Li & Swersky, 2015), and a variant was proposed in the setting of evaluating Boltzmann machines (Bengio et al., 2013). Papers reporting KDE estimates often caution that KDE is not meant to be applied in high-dimensional spaces and that the results might therefore be inaccurate. Nevertheless, KDE remains the standard protocol for evaluating decoder-based models. We analyzed the accuracy of the KDE estimates by comparing against AIS. Both estimates are stochastic lower bounds on the true log-likelihood (see Section 3), so larger values are guaranteed (with high probability) to be more accurate.

For each estimator, we varied one parameter influencing the computational budget; for AIS, this was the number of intermediate distributions (chosen from {100, 500, 1000, 2000, 10000}), and for KDE, it was the number of samples (chosen from {10000, 100000, 500000, 1000000, 2000000}). Using GMMN-10 for illustration, we plot both log-likelihood estimates on 100 simulated examples as a function of evaluation time in Fig. 2(b). We also plot the upper bound on the likelihood given by running AIS in the reverse direction. We see that the BDMC gap approaches zero, validating the accuracy of AIS. We also see that the AIS estimator achieves much more accurate estimates in a similar evaluation time. Furthermore, the KDE estimates appear to level off, suggesting one cannot obtain accurate results even using orders of magnitude more samples.

The KDE estimation error also impacts the estimate of the observation noise σ, since a large value of σ is needed for the samples to cover the full distribution. We compared the log-likelihoods estimated by AIS and KDE with varying choices of σ on 100 training and validation examples of MNIST. We used 1 million simulated samples for the KDE evaluation, which takes almost the same time as running the AIS estimation. In Fig. 2(a), we show the log-likelihood of GAN-50 estimated by KDE and AIS as a function of σ. Because the accuracy of KDE declines sharply for small σ values, it creates a strong bias towards large σ."}, {"section_index": "11", "section_name": "5.3.3 HOW ACCURATE IS THE IWAE BOUND?", "section_text": "In principle, one could estimate VAE likelihoods using the VAE objective function (which is a lower bound on the true log-likelihood). However, it is more common to use importance weighting, where the proposal distribution is computed by the recognition network. This is provably more accurate than the VAE bound (Burda et al., 2016). Because the importance weighted estimate corresponds to the objective function used by the Importance Weighted Autoencoder (IWAE) (Burda et al., 2016), we will refer to it as the IWAE bound.

On continuous MNIST, the IWAE bound underestimated the true log-likelihoods by at least 33.2 nats on the training set and 187.4 nats on the test set. While this is considerably more accurate than KDE, the error is still significant. Interestingly, this result also suggests that the recognition network overfits the training data.

Table 1: AIS vs. IWAE bound on 10,000 test examples of binarized MNIST. "# dist" denotes the number of intermediate distributions used for evaluation. We find that the AIS estimate is consistently 1 nat higher than the IWAE bound; AIS+encoder can achieve about the same estimate as AIS, but with 1 order of magnitude fewer intermediate distributions. (The table's numeric entries did not survive extraction.)

(Nats)   | AIS Test        | AIS Train        | BDMC gap | KDE Test | IWAE Test
VAE-50   | 991.435 ± 6.477 | 1298.830 ± 0.863 | 1.540    | 351.213  | 826.325
GAN-50   | 627.297 ± 8.813 | 648.283 ± 21.115 | 10.045   | 300.331  | /
GMMN-50  | 593.472 ± 8.591 | 607.272 ± 1.451  | 1.146    | 277.193  | /
VAE-10   | 705.375 ± 7.411 | 791.029 ± 0.810  | 0.832    | 408.659  | 486.466
GAN-10   | 328.772 ± 5.538 | 346.640 ± 4.260  | 0.934    | 259.673  | /
GMMN-10  | 346.679 ± 5.860 | 358.943 ± 6.485  | 0.605    | 262.73   | /

Table 2: Model comparisons on 1000 test and training examples of continuous MNIST. Confidence intervals reflect the variability from the choice of training or test examples (which appears to be the dominant source of error for the AIS values). AIS, KDE, and IWAE are all stochastic lower bounds on the log-likelihood.
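The KDE ("Parzen window") estimator compared in Table 2 is straightforward to sketch (our own illustration, following the standard protocol referenced above): draw samples from the decoder, place an isotropic Gaussian kernel on each, and score test points with a log-mean-exp:

```python
import numpy as np

def kde_log_likelihood(x_test, model_samples, sigma):
    """Parzen-window estimate of log p(x) from decoder samples.

    x_test: (n, D) test points; model_samples: (m, D) samples from the model;
    sigma: bandwidth of the isotropic Gaussian kernel.
    Returns a stochastic lower bound on the true log-likelihood of each point.
    """
    n, D = x_test.shape
    m = model_samples.shape[0]
    # squared distances between every test point and every model sample
    d2 = ((x_test[:, None, :] - model_samples[None, :, :]) ** 2).sum(-1)
    log_kernel = -0.5 * d2 / sigma**2 - 0.5 * D * np.log(2 * np.pi * sigma**2)
    # log (1/m) * sum_j N(x | s_j, sigma^2 I), computed stably
    return np.logaddexp.reduce(log_kernel, axis=1) - np.log(m)
```

With millions of model samples, as used in the paper, the distance matrix would be computed in batches rather than all at once.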
Since VAE and IWAE results have customarily been reported on binarized MNIST, we additionally trained an IWAE in this setting. The training details are given in Appendix A.2. To show the practicality of our method, we evaluated the IWAE on the full 10,000-example test set using AIS and the IWAE bound, with different choices of intermediate distributions and numbers of simulated samples, as shown in Table 1. We also evaluate AIS with the initial distribution defined by the encoders of the VAEs, denoted as AIS+encoder. We find that the IWAE bound underestimates the true value by at least 1 nat, which is a large difference by the standards of binarized MNIST. (E.g., it represents about half of the gap between a state-of-the-art permutation-invariant model (Tran et al., 2016) and one which exploits structure (van den Oord et al., 2016).) The AIS and IWAE estimates are compared in terms of evaluation time in Fig. 2(c)."}, {"section_index": "12", "section_name": "5.4 SCIENTIFIC FINDINGS", "section_text": "Having validated the accuracy of AIS, we now use it to analyze the effectiveness of various training criteria. We also highlight phenomena which would not be observable using existing log-likelihood estimators or by inspecting samples. For all experiments in this section, we used 10,000 intermediate distributions for AIS, 1 million simulated samples for KDE, and 200,000 importance samples for the IWAE bound. (These settings resulted in similar computation time for all three estimators.)"}, {"section_index": "13", "section_name": "5.4.1 MODEL LIKELIHOOD COMPARISON", "section_text": "We evaluated the trained models using AIS and KDE on 1000 test examples of MNIST; results are shown in Table 2. We find that for all three training criteria, the larger architectures consistently outperformed the smaller ones. We also find that for both the 10- and 50-dimensional architectures, the VAEs achieved substantially higher log-likelihoods than the GANs or GMMNs. It is not surprising that the VAEs achieved higher likelihood, because they were trained using a likelihood-based objective, while the GANs and GMMNs were not. However, it is interesting that the difference in log-likelihoods was so large; in the rest of this section, we attempt to analyze what exactly is causing this large difference.

One question that arises in evaluation of decoder-based generative models is whether they memorize parts of the training dataset. One cannot test this by looking only at model samples. The commonly reported nearest neighbours from the training set can be misleading (Theis et al., 2016), and interpolation in the latent space between different samples can be visually appealing, but does not provide a quantitative measure of the degree of generalization.

We note that the KDE errors were of the same order of magnitude as the differences between models, indicating that it cannot be used reliably to compare log-likelihoods. Furthermore, KDE did not identify the correct ordering of models; for instance, it estimated a lower log-likelihood for VAE-50 than for VAE-10, even though its true log-likelihood was almost 300 nats higher. KDE also underestimated by an order of magnitude the log-likelihood improvements that resulted from using the larger architectures. (E.g., it estimated a 15 nat difference between GMMN-10 and GMMN-50, even though the true difference was 247 nats as estimated by AIS.)
These differences are also hard to observe simply by looking at samples; for instance, we were unable to visually distinguish the quality of samples for GAN-10 and GAN-50 (see Fig. 1), even though their log-likelihoods differed by almost 300 nats on both the training and test sets.

Figure 3: Training curves for (a) GAN-50, (b) VAE-50, and (c) GMMN-50, as measured by AIS, KDE, and (if applicable) the IWAE lower bound. All estimates shown here are lower bounds. In (c), the gap between training and validation log-likelihoods is fairly small (see Table 2). (Only the panel titles and axis names survive extraction: log-likelihood vs. number of epochs for each network.)

To analyze the degree of overfitting, Fig. 3 shows training curves for three networks as measured by AIS, KDE, and the IWAE bound. We observe that GAN-50's training and test log-likelihoods are nearly identical throughout training, disconfirming the hypothesis that it was memorizing training data. Both GAN-50 and GMMN-50 overfit less than VAE-50.

We also observed two phenomena which could not be measured using existing techniques. First, in the case of VAE-50, the IWAE lower bound starts to decline after 200 epochs, while the AIS estimates hold steady, suggesting it is the recognition network rather than the generative network which is overfitting most. Second, the GMMN-50 training and validation errors continue to improve at 10,000 epochs, even though KDE erroneously indicates that performance has leveled off."}, {"section_index": "14", "section_name": "5.4.3 HOW APPROPRIATE IS THE OBSERVATION MODEL?", "section_text": "Appendix B addresses the questions of whether the spherical Gaussian observation model is a good fit and whether the log-likelihood differences could be an artifact of the observation model. We find that all of the models can be substantially improved by accounting for non-Gaussianity, but that this effect is insufficient to explain the gap between the VAEs and the other models.

It was previously observed that one of the potential failure modes of Boltzmann machines is to fail to generate one or more modes of a distribution, or to drastically misallocate probability mass between modes (Salakhutdinov & Murray, 2008). Here we analyze this for decoder-based models.

First, we ask a coarse-grained version of this question: do the networks allocate probability mass correctly between the 10 digit classes, and if not, can this explain the difference in log-likelihood scores? In Fig. 1, we see that GAN-50's distribution of digit classes was heavily skewed: out of 100 samples, it generated 37 images of 1's, but only a single 2. This appears to be a large effect, but it does not explain the magnitude of the log-likelihood difference from the VAEs. In particular, if the allocation of digit classes were off by a factor of 10, this effect by itself could cost at most log 10 ≈ 2.3 nats of log-likelihood. Since VAE-50 outperformed GAN-50 by 364 nats, this effect cannot explain the difference.

However, MNIST has many factors of variability beyond simply the 10 digit classes. In order to determine whether any of the models missed more fine-grained modes, we visualized posterior samples for each model conditioned on training and test images. In particular, for each image x under consideration, we used AIS to approximately sample z from the posterior distribution p(z|x), and then ran the decoder on z. While these samples are approximate, Grosse et al. (2016) point out that the BDMC gap also bounds the KL divergence of approximate samples from the true posterior. With the exception of GAN-50, our BDMC gaps were on the order of 1 nat, suggesting our approximate posterior samples are fairly representative. The results are shown in Fig. 4. Further posterior visualizations for digit class 2 (the most difficult for the models we considered) are shown in Appendix C.

Both VAEs' posterior samples match the observations almost perfectly.
(We observed a few poorly reconstructed examples on the test set, but not on the training set.) The GANs and GMMNs fail to reconstruct some of the examples on both the training and validation sets, suggesting that they failed to learn some modes of the distribution.

Figure 4: (a) and (b) show visualizations of posterior samples of 10 training/validation examples. (c) shows visualizations of posterior samples of 10 training examples of digit "2". Each column of 10 digits comes from the true data and the six models. The order of visualization is: true data, GAN-10, VAE-10, GMMN-10, GAN-50, VAE-50, GMMN-50. (The digit grids themselves did not survive extraction.)"}, {"section_index": "15", "section_name": "ACKNOWLEDGMENTS", "section_text": "Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, et al. Theano: A Python framework for fast computation of mathematical expressions, 2016.

Wacha Bounliphone, Eugene Belilovsky, Matthew B. Blaschko, Ioannis Antonoglou, and Arthur Gretton. A test of relative similarity for model selection in generative models. In ICLR, 2016.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. arXiv:1605.08803, 2016.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014.

Mark R. Jerrum, Leslie G. Valiant, and Vijay V. Vazirani. Random generation of combinatorial structures from a uniform distribution. Theoretical Computer Science, 43:169-188, 1986.

Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.

Yujia Li and Kevin Swersky. Generative moment matching networks. In ICML 32, 2015.

Radford M. Neal. MCMC using Hamiltonian dynamics.
Handbook of Markov Chain Monte Carlo, 54:113-162, 2010.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016.

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In ICLR, 2016.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.

Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros. Generative visual manipulation on the natural image manifold. In Proceedings of the European Conference on Computer Vision (ECCV), 2016.

For training the GANs and VAEs, we use our own implementation. We use Adam for optimization, and perform a grid search of the learning rate over {0.001, 0.0001, 0.00001}. For training the GMMNs, we take the implementation from https://github.com/yujiali/gmmn.git. Following this implementation, we use SGD with momentum for optimization, and perform a grid search of the learning rate over {0.1, 0.5, 1, 2}, with momentum 0.9."}, {"section_index": "16", "section_name": "A.2 MODELS ON BINARIZED MNIST", "section_text": "Its decoder has the architecture 50-200-200-784 with all tanh hidden layers and a sigmoid output layer, and its encoder is symmetric in architecture, with a linear output layer. We take the implementation at https://github.com/yburda/iwae.git for training the IWAE model. The IWAE bound was computed with 50 samples during training. We keep all the hyperparameter choices the same as in the implementation.

The decoders have all fully connected layers, and the number of units in each layer was 10-64-256-256-1024-784 for the smaller architecture and 50-1024-1024-1024-784 for the larger one. Other architectural details are summarized as follows.

- For GAN-10, we used a discriminator with the architecture 784-512-256-1, where each layer used dropout with parameter 0.5. For GAN-50, we used a discriminator with architecture 784-4096-4096-4096-4096-1. All hidden layers used dropout with parameter 0.8. All hidden layers in both networks used the tanh activation function, and the output layers used the logistic function.
- The larger model uses an encoder of architecture 784-1024-1024-1024-100. We add a dropout layer between each pair of hidden layers, with a dropout rate of 0.2. The smaller model uses an encoder of architecture 784-256-64-20. The generator's hidden layers use the tanh activation function, and the output layer uses a sigmoid unit. The encoder's hidden layers use the tanh activation function, and the output layer uses a linear activation.
- GMMN: the hidden layers use the ReLU activation function, and the output layer uses a sigmoid unit.

Table 3: Log-likelihoods (in nats) with the observation noise σ chosen optimally per example ("Optimal") vs. fixed on a validation set ("Fixed").

(Nats)   | Train: Optimal | Train: Fixed | Improvement | Valid: Optimal | Valid: Fixed | Improvement
GAN-50   | 711.405        | 620.498      | 90.907      | 702.699        | 623.492      | 79.207
GMMN-50  | 655.807        | 571.803      | 84.004      | 661.652        | 594.612      | 67.040
GAN-10   | 376.788        | 318.948      | 57.840      | 368.585        | 316.614      | 51.971
GMMN-10  | 393.976        | 345.177      | 48.799      | 371.325        | 332.360      | 38.965
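The "Optimal" columns can be computed by scoring each example under a small set of candidate variances and keeping the best, as described in Appendix B below. A minimal sketch of that per-example selection (our own illustration; log_lik_with_sigma is a hypothetical stand-in for the AIS evaluation at a given σ):

```python
import numpy as np

def per_example_optimal(xs, sigmas, log_lik_with_sigma):
    """For each example, take the best log-likelihood over candidate sigmas.

    xs: iterable of examples; sigmas: candidate observation-noise values;
    log_lik_with_sigma(x, sigma): model log-likelihood of x under a
    spherical Gaussian observation model with variance sigma.
    """
    scores = np.array([[log_lik_with_sigma(x, s) for s in sigmas] for x in xs])
    return scores.max(axis=1)   # "Optimal" column: per-example best sigma

# e.g. the candidate grid used for the 50-dimensional models (footnote 2):
sigmas_50 = [0.005, 0.01, 0.015, 0.02, 0.025]
```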
In this section, we consider whether the difference in log-likelihood between models could be an artifact of the Gaussian noise model (which we know to be a poor fit). In principle, the Gaussian noise assumption could be unfair to the GANs and GMMNs, because the VAE training uses the correct observation model, while the GAN and GMMN training does not.

To determine the size of this effect, we evaluated the models under a different regime where, instead of choosing a fixed value of the observation noise σ on a validation set, σ was tuned independently for each example.² This is not a proper generative model, but it can be viewed as an upper bound on the log-likelihood that would be achievable with a heavy-tailed and radially symmetric noise model.³ Results are shown in Table 3. We see that adapting σ for each example results in a log-likelihood improvement between 30 and 100 nats for all of the networks. In general, the examples which show the largest performance jump are images of 1's (which prefer smaller σ) and 2's (which prefer larger σ). This is a significant effect, and suggests that one could significantly improve the log-likelihood scores by picking a better observation model. However, this effect is smaller in magnitude than the differences between VAE and GAN/GMMN log-likelihoods, so it fails to explain the likelihood difference.

²We pick the best variance parameter among {0.005, 0.01, 0.015, 0.02, 0.025} for each training/validation example when evaluating GAN-50 and GMMN-50, and {0.015, 0.02, 0.025, 0.03, 0.035} when evaluating GAN-10 and GMMN-10.

³In particular, heavy-tailed radially symmetric distributions can be viewed as Gaussian scale mixtures (Wainwright & Simoncelli, 1999). I.e., one has a prior distribution on σ (possibly learned) and integrates it out for each test example. Clearly the probability under such a mixture cannot exceed the maximum value with respect to σ.

C POSTERIOR VISUALIZATION OF DIGIT "2"

According to the log-likelihood evaluation, we find digit "2" is the hardest digit to model. In this section we investigate the quality of each model's modelling of "2". We randomly sampled a fixed set of 100 samples of digit "2" from the training data and compare whether each model captures this mode. We show the plots of "2" for GAN-10, GAN-50, VAE-10, and the true data in the following figures for illustration. We see that GAN-10 fails at capturing many instances of digit "2" in the training data: instead of generating "2", it tries to generate digits "1", "7", "9", "4", "8" in its reconstructions. GAN-50 does much better: its reconstructions are all digit "2", and there is only some style difference from the true data. VAE-10 totally dominates this competition, perfectly reconstructing all the samples of digit "2". We emphasize that if one samples directly from each model, the samples look visually indistinguishable (see Fig. 1), but we can clearly see differences in the posterior samples.

Figure 5: Posterior samples of digit "2" for GAN-10.

Figure 6: Posterior samples of digit "2" for GAN-50.

Figure 7: Posterior samples of digit "2" for VAE-10.

Figure 8: 100 digit "2" samples from the training data."}]
HkNRsU5ge | [{"section_index": "0", "section_name": "SIGMA-DELTA QUANTIZED NETWORKS", "section_text": "Peter O'Connor, Max Welling

QUVA Lab, Informatics Institute, University of Amsterdam"}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "For most deep-learning architectures, the amount of computation required to process a sample of input data is independent of the contents of that data.

Natural data tends to contain a great deal of spatial and temporal redundancy. Researchers have taken advantage of such redundancy to design encoding schemes, like JPEG and MPEG, which introduce small compromises to image fidelity in exchange for substantial savings in the amount of memory required to store images and videos.

In neuroscience, it seems clear that some kind of sparse spatio-temporal coding is going on. Koch et al. (2006) estimate that the human retina transmits 8.75 Mbps, which is about the same as compressed 1080p video at 30 FPS.

Thus it seems natural to think that perhaps we should be doing this in deep learning. In this paper, we propose a neural network where neurons only communicate discretized changes in their activations to one another. The computational cost of running such a network would be proportional to the amount of change in the input. Neurons send signals when the change in their input accumulates past some threshold, at which point they send a discrete "spike" notifying downstream neurons of the change. Such a system has at least two advantages over the conventional way of doing things."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep neural networks can be obscenely wasteful. When processing video, a convolutional network expends a fixed amount of computation for each frame with no regard to the similarity between neighbouring frames. As a result, it ends up repeatedly doing very similar computations. To put an end to such waste, we introduce Sigma-Delta networks. With each new input, each layer in this network sends a discretized form of its change in activation to the next layer. Thus the amount of computation that the network does scales with the amount of change in the input and layer activations, rather than the size of the network. We introduce an optimization method for converting any pre-trained deep network into an optimally efficient Sigma-Delta network, and show that our algorithm, if run on the appropriate hardware, could cut at least an order of magnitude from the computational cost of processing video data.

1. When extracting features from temporally redundant data, it is much more efficient to communicate the changes in activation than it is to re-process each frame.
2. When receiving data asynchronously from different sources (e.g. sensors, or nodes in a distributed network) at different rates, it no longer makes sense to have a global network update. We could recompute the network with every new input, reusing the stale inputs from the other sources, but this requires doing a great deal of repeated computation for only small differences in input data. We could keep a history of all inputs and update the network periodically, but then we lose the ability to respond immediately to new inputs. Our approach gets around this ugly tradeoff by allowing for efficient approximate updates of the network given a partial update to the input data. The computational cost of the update is proportional to the effect that the new information has on the network's state.
This work originated in the study of spiking neural networks, but treads into the territory of discretizing neural nets. The most closely related work is that of Zambrano and Bohte (2016). In this work, the authors describe an adaptive Sigma-Delta modulation method, in which neurons communicate analog signals to one another by means of a "spike-encoding" mechanism, where a temporal signal is encoded into a sequence of weighted spikes and then approximately decoded as a sum of temporally-shifted exponential kernels. The authors create a scheme for being parsimonious with spikes by allowing adaptive scaling of thresholds, at the cost of sending spikes with real values attached to them, rather than the classic "all or nothing" spikes. Their work references a slightly earlier work by Yoon (2016), which reframes common neural models as forms of Asynchronous Sigma-Delta modulation. In a concurrent work, Lee et al. (2016) implement backpropagation in a similar system (but without adaptive threshold scaling), and demonstrate the best-yet performance on MNIST for networks trained with spiking models. This work postdates Diehl et al. (2015), which proposes a scheme for normalizing neuron activations so that a spiking neural network can be optimized for fast classification.

Our model contrasts with all of the above in that it is time-agnostic. Although we refer to sending "temporal differences" between neurons, our neurons have no concept of time - there is no "leak" in neuron potential, and our neurons' behaviour depends only on the order of the inputs. Our work also separates the concepts of nonlinearity and discretization, uses units that communicate differences rather than absolute signal values, and explicitly minimizes an objective function corresponding to computational cost.

Coming from another corner, Courbariaux et al. describe a scheme for binarizing networks with the aim of achieving reductions in the amount of computation and memory required to run neural nets. They introduce a number of tricks for training binarized neural networks - a normally difficult task due to the lack of gradient information. Esser et al. (2016) use a similar binarization scheme to efficiently implement a spiking neural network on the IBM TrueNorth chip. Ardakani et al. (2015) take another approach - to approximate the real-valued operations of a neural net with a sequence of stochastic integer operations - and show how these can lead to cheaper computation.

These discretization approaches differ from ours in that they do not aim to take advantage of temporal redundancy in data, but rather aim to find ways of saving computation by learning in a low-precision regime. Ideas from these works could be combined with the ideas presented in this paper.

The idea of sending quantized temporal differences has been applied to make event-based sensors, such as the Dynamic Vision Sensor (Lichtsteiner et al., 2008), which quantize changes in analog pixel voltages and send out pixel-change events asynchronously. The model we propose in this paper could be used to efficiently process the outputs of such sensors.

Finally, our previous work (O'Connor and Welling, 2016) develops a method for doing backpropagation with the same type of time-agnostic spiking neurons we use here.
In this paper, we do not aim to train the network from scratch, but instead focus on how we can compute efficiently by sending temporal differences between neurons.

In this section, we describe how we start with a traditional deep neural network and apply two modifications - temporal-difference communication and rounding - to create the Sigma-Delta network. To explain the network, we follow Figure 1 from top to bottom, starting with a standard deep network and progressing to our Sigma-Delta network. Here, we think of the forward pass of a neural network as a composition of subfunctions: $f(x) = (f_L \circ \dots \circ f_2 \circ f_1)(x)$.

We now define "temporal difference" (∆T) and "temporal integration" (ΣT) modules as follows:

Algorithm 1: Temporal Difference (∆T)
1: Internal: x_last ∈ R^d ← 0
2: Input: x ∈ R^d
3: y ← x − x_last
4: x_last ← x
5: Return: y ∈ R^d

Algorithm 2: Temporal Integration (ΣT)
1: Internal: y ∈ R^d ← 0
2: Input: x ∈ R^d
3: y ← y + x
4: Return: y ∈ R^d

So that when presented with a sequence of inputs $x_1, ..., x_t$: $\Delta T(x_t) = x_t - x_{t-1}$ (with $x_0 = 0$), and $\Sigma T(x_t) = \sum_{\tau=1}^{t} x_\tau$. It should be noted that when we refer to "temporal differences", we refer not to the change in the signal over time, but to the change between two inputs presented sequentially. The output of our network only depends on the values and order of the inputs, not on the temporal spacing between them. This distinction only matters when dealing with asynchronous inputs such as those from the Dynamic Vision Sensor (Lichtsteiner et al., 2008), which are not considered in this paper.

Now suppose our network consists of alternating linear functions $w(x)$ and nonlinear functions $h(x)$, so that $f(x) = (h_L \circ w_L \circ \dots \circ h_2 \circ w_2 \circ h_1 \circ w_1)(x)$. We can harmlessly insert $\Sigma T \circ \Delta T$ pairs into the network, since temporal integration exactly inverts temporal differencing. Note that for a linear function $w(x)$, the operations $(\Delta T, w, \Sigma T)$ all commute with one another. That is:

$$\Delta T(w(\Sigma T(x))) = w(\Delta T(\Sigma T(x))) = w(x) \tag{1}$$

Therefore we can replace each $w$ with $\Sigma T \circ w \circ \Delta T$, yielding $f(x) = (h_L \circ \Sigma T \circ w_L \circ \Delta T \circ \dots \circ h_2 \circ \Sigma T \circ w_2 \circ \Delta T \circ h_1 \circ \Sigma T \circ w_1 \circ \Delta T)(x)$, which corresponds to the network shown in Figure 1B. For now this is completely pointless, since we do not change the network function at all, but it will come in handy in the next section, where we discretize the output of the ∆T modules."}, {"section_index": "3", "section_name": "3.2 DISCRETIZING THE DELTAS", "section_text": "When dealing with data that is naturally spatiotemporally redundant, like most video, we expect the output of the ∆T modules to be a vector with mostly low values, with some peaks corresponding to temporal transitions at certain input positions. We expect the data to have this property not only at the input layer, but even more so at higher layers, which encode higher-level features (edges, object parts, class labels), which we would expect to vary more slowly over time than pixel values. If we discretize this "peaky" vector, we end up with a sparse vector of integers, which can then be used to cheaply communicate the approximate change in state of a layer to its downstream layer(s).

A sensible approach is to apply rounding before the temporal-difference operation - i.e. round the activation values and then send the temporal differences of these rounded values. It is then easy to show that the network's function will remain identical to that of the rounding network:

$$\Sigma T(w(\Delta T(\mathrm{round}(x)))) = w(\Sigma T(\Delta T(\mathrm{round}(x)))) = w(\mathrm{round}(x)) \tag{2}$$

It follows from this result that our Sigma-Delta network depicted in Figure 1D computes an identical function to that of the rounding network in Figure 1C. In other words, the output $y_t$ of the Sigma-Delta network is solely dependent on the parameters of the network and the current input $x_t$, and not on any of the previous inputs $x_1..x_{t-1}$. The amount of computation required for the update, however, depends on $x_{t-1}$. Specifically, if $x_t$ is similar to $x_{t-1}$, the Sigma-Delta network should require less computation to perform an update than the Rounding Network.
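The two module definitions above translate directly into code. A minimal sketch (ours, not the authors' released implementation) of the stateful modules, plus a check that temporal integration undoes temporal differencing:

```python
import numpy as np

class TemporalDifference:          # Algorithm 1: y_t = x_t - x_{t-1}, with x_0 = 0
    def __init__(self, d):
        self.x_last = np.zeros(d)
    def __call__(self, x):
        y = x - self.x_last
        self.x_last = x.copy()
        return y

class TemporalIntegration:         # Algorithm 2: y_t = sum of all inputs so far
    def __init__(self, d):
        self.y = np.zeros(d)
    def __call__(self, x):
        self.y = self.y + x
        return self.y

# Sanity check: SigmaT(DeltaT(x_t)) == x_t for any input sequence.
dT, sT = TemporalDifference(4), TemporalIntegration(4)
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(4)
    assert np.allclose(sT(dT(x)), x)
```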
"}, {"section_index": "4", "section_name": "3.3 SPARSE DOT PRODUCT", "section_text": "Most of the computation in deep neural networks is consumed doing matrix multiplications and convolutions. The architecture we propose saves computation by translating the input to these operations into an integer array with a small L1 norm.

Figure 1: A: An ordinary deep network, which consists of an alternating sequence of linear operations $w_i(x)$ and nonlinear transforms $h_i(x)$. B: The Temporal Difference Network, described in Section 3.1, computes the exact same function as network A, but communicates differences in activation between layers. C: An approximation of network A where activations are rounded before being sent to the next layer. D: The Sigma-Delta Network combines the modifications of B and C. Functionally, it is identical to the Rounding Network, but it can compute forward passes more cheaply when input data is temporally redundant. (The figure's legend classifies signals as continuous/discrete and dense/sparse; repeated numeric markers indicate identical signals.)

Figure 1 visually summarizes the four types of network we have described. Inserting the temporal sum and difference modules discussed in Section 3.1 leads to the Temporal Difference Network, which is functionally identical to the Original Network.

With sparse, low-magnitude integer input, we can compute the vector-matrix dot product efficiently by decomposing it into a sequence of vector additions. We can see this by decomposing the vector $x = \sum_{n=1}^{N} s_n e_{i_n}$, where $e_{i_n}$ is a one-hot vector with element $i_n$ hot, $s_n \in \{-1, +1\}$, and $N = |x|_{L1}$ is the total L1 magnitude of the vector. We can then compute the dot-product as a series of additions, as shown in Equation 3:

$$u = x \cdot W = \left(\sum_{n=1}^{N} s_n e_{i_n}\right) \cdot W = \sum_{n=1}^{N} s_n \left(e_{i_n} \cdot W\right) = \sum_{n=1}^{N} s_n W_{i_n, \cdot}, \qquad W \in \mathbb{R}^{d_{in} \times d_{out}} \tag{3}$$

Computing the dot product this way takes $N \cdot d_{out}$ additions. A normal dense dot-product, by comparison, takes $d_{in} \cdot d_{out}$ multiplications and $(d_{in} - 1) \cdot d_{out}$ additions.

This is where the energy savings come in. Horowitz (2014) estimates that on a current 45nm silicon process, a 32-bit floating-point multiplication costs 3.7pJ, vs 0.9pJ for a floating-point addition. With integer or fixed-point arithmetic, the difference is even more pronounced, with 3.1pJ for multiplication vs 0.1pJ for addition. This of course ignores the larger cost of processing instructions and moving memory, but gives us an idea of how these operations might compare on optimized hardware. So provided we can approximate the forward pass of a network to a satisfactory degree of precision without doing many more operations than the original network, we can potentially compute much more efficiently.
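Equation 3 above corresponds to the following sketch (our own illustration): the dot product with a sparse integer vector reduces to signed additions of rows of W.

```python
import numpy as np

def sparse_int_dot(x, W):
    """Compute x . W using only row additions, as in Equation 3.

    x: (d_in,) integer vector, assumed sparse with small |x|_L1;
    W: (d_in, d_out) weight matrix.
    Cost: |x|_L1 * d_out additions instead of d_in * d_out multiplies.
    """
    u = np.zeros(W.shape[1])
    for i in np.flatnonzero(x):          # only nonzero input positions matter
        for _ in range(abs(x[i])):       # |x_i| repeated signed additions
            u += np.sign(x[i]) * W[i]    # add (or subtract) row i of W
    return u

x = np.array([0, 2, 0, -1])
W = np.arange(12).reshape(4, 3)
assert np.allclose(sparse_int_dot(x, W), x @ W)
```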
In this work, we do not aim to train quantized networks from scratch, as we did in O'Connor and Welling (2016). Rather, we take existing pretrained networks and optimize them as Sigma-Delta networks. In our situation, we have two competing objectives: error (with respect to a non-quantized forward pass) and computation (the number of additions performed in a forward pass)."}, {"section_index": "5", "section_name": "4.1 RESCALING OUR NEURONS", "section_text": "We can control the trade-off between these objectives by changing the scale of our discretization. We can thus extend our rounding function by adding a scale $k \in \mathbb{R}^+$:

$$\mathrm{round}(x, k) = \mathrm{round}(x \cdot k) / k \tag{4}$$

This scale can either be layerwise or unitwise (in which case we have a vector of scales per layer). Higher $k$ values will lead to higher precision, but also more computation, for the reason mentioned in Section 3.2. Note that the final division by $k$ is equivalent to scaling the following weight matrix by $\frac{1}{k}$. So in practice, our network functions become (reconstructing the garbled originals from the surrounding definitions):

$$f_{round}(x) = \left(h_L \circ \tfrac{w_L}{k_L} \circ \mathrm{round} \circ (\cdot\, k_L) \circ \dots \circ h_1 \circ \tfrac{w_1}{k_1} \circ \mathrm{round} \circ (\cdot\, k_1)\right)(x) \tag{5}$$

$$f_{\Sigma\Delta}(x) = \left(h_L \circ \Sigma T \circ \tfrac{w_L}{k_L} \circ \Delta T \circ \mathrm{round} \circ (\cdot\, k_L) \circ \dots \circ h_1 \circ \Sigma T \circ \tfrac{w_1}{k_1} \circ \Delta T \circ \mathrm{round} \circ (\cdot\, k_1)\right)(x) \tag{6}$$

for the Rounding Network and the Sigma-Delta Network, respectively. By adjusting these scales $k_l$, we can affect the tradeoff between computation and error. Note that if we use ReLU activation functions, the parameters $k_l$ can simply be baked into the parameters of the network (see Appendix C)."}, {"section_index": "6", "section_name": "4.2 THE ART OF COMPROMISE", "section_text": "In this section, we aim to find the optimal trade-offs between error and computation for the Rounding Network (Network C in Figure 1). We define our loss as follows:

$$\mathcal{L}_{error} = D(f_{round}(x), f_{true}(x)) \tag{7}$$

$$\mathcal{L}_{comp} = \sum_{l=1}^{L-1} |s_l|_{L1} \cdot d_{l+1} \tag{8}$$

$$\mathcal{L}_{total} = \mathcal{L}_{error} + \lambda \mathcal{L}_{comp} \tag{9}$$

where $D(a, b)$ is some scalar distance function (we use KL-divergence for softmax output layers and the L2 norm otherwise), $f_{round}(x)$ is the output of the Rounding Network, and $f_{true}(x)$ is the output of the Original Network. $\mathcal{L}_{comp}$ is the computational loss, defined as the total number of additions required in a forward pass. Each layer performs $|s_l|_{L1} \cdot d_{l+1}$ additions, where $s_l$ is the discrete output of the $l$'th layer and $d_{l+1}$ is the dimensionality of the $(l+1)$'th layer. Finally, $\lambda$ is the tradeoff parameter balancing the importance of the two losses.

We aim to use this loss function to optimize our layer scales $k_l$ to find an optimal tradeoff between accuracy and computation, given the tradeoff parameter $\lambda$."}, {"section_index": "7", "section_name": "4.3 DIFFERENTIATING THE UNDIFFERENTIABLE", "section_text": "We run into an obvious problem: $y = \mathrm{round}(k \cdot x)$ is not differentiable with respect to our scale $k$ or our input $x$. We get around this by using a similar method to Courbariaux et al., who in turn borrowed it from a lecture by Hinton (2012). That is, on the backward pass, when computing the gradient of $\mathcal{L}_{error}$ with respect to the scale $k_l$, we simply pass the gradient through the rounding functions in layers $[l+1, ..., L]$, i.e. we say $\frac{\partial\, \mathrm{round}(x)}{\partial x} \approx 1$.
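A common way to realize this pass-through ("straight-through") gradient in an autodiff framework is the stop-gradient trick. Here is a minimal sketch of the idea (ours - the paper's own implementation lives in the linked repository):

```python
import numpy as np

def round_ste(x, k, stop_gradient):
    """Scaled rounding with a straight-through gradient.

    Forward: round(x * k) / k.  Backward: the gradient of the identity,
    i.e. d round(x)/dx is treated as 1, as described above.
    stop_gradient: the framework's gradient-blocking op (e.g. detach);
    it is the identity on the forward pass.
    """
    y = np.round(x * k) / k
    # x + stop_gradient(y - x) equals y in value, but its gradient w.r.t. x
    # is 1, because the correction term is treated as a constant.
    return x + stop_gradient(y - x)

# With plain NumPy (no autodiff), stop_gradient is just the identity:
print(round_ste(np.array([0.26, 1.13]), k=4.0, stop_gradient=lambda t: t))
```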
When computing the gradient with respect to the computational cost $\mathcal{L}_{comp}$, we again just pass the gradient through all rounding operations in the backward pass for layers $[l+1, ..., L]$. We found instabilities in training when using the computational loss of higher layers, $\mathcal{L}_{comp,l'} : l' \in [l+1, ..., L]$, to update the scale of layer $l$. Since we don't expect this term to have much effect anyway, we choose to only use the gradient of the computational cost in layer $l$ when updating scale $k_l$, i.e., we approximate $\frac{\partial \mathcal{L}_{comp}}{\partial k_l} \approx \frac{\partial \mathcal{L}_{comp,l}}{\partial k_l}$.

Our scale parameters must also remain in the positive range, and stay well away from zero, where they can cause instability due to the division by $k$ (see Equation 5). To handle this, we parametrize our scales in log-space, as $\kappa_l = \log(k_l)$. Our scale-parameter update rule becomes:

$$\Delta \kappa_l = -\eta \left( \frac{\partial \mathcal{L}_{error}}{\partial \kappa_l}\bigg|_{pass:[l+1..L]} + \lambda\, \frac{\partial \left(|s_l|_{L1}\, d_{l+1}\right)}{\partial \kappa_l}\bigg|_{pass:l} \right) \tag{10}$$

where $s_l$ is the rounded signal from layer $l$, $d_{l+1}$ is the "fan-out" (equivalent to the dimension of layer $l+1$ in a fully-connected network), and $pass:[l+1..L]$ indicates that, on the backward pass, we simply pass the gradient through the rounding functions on layers $l+1..L$."}, {"section_index": "8", "section_name": "5 EXPERIMENTS", "section_text": "We start with a very simple toy problem to verify our method. We initialize a 2-layer (100-100-100) ReLU network with random weights using the initialization scheme proposed in Glorot and Bengio (2010), then scale the layers' weights by mismatched constants. The weight rescaling does not affect the function of the network, but makes it very ill-adapted for discretization (the first layer will be represented too coarsely, causing error; the second too finely, causing wasted computation). We create random input data, and use it to optimize the layer scales according to Equation 10. We verify, by comparing to a large collection of randomly drawn rescalings, that by tuning λ we land on different places of the Pareto frontier balancing error and computation. Figure 2 shows that this is indeed the case. In this experiment, error and computation are evaluated just on the Rounding network - we test the Sigma-Delta network in the next experiment, which includes temporal data."}, {"section_index": "9", "section_name": "5.2 TEMPORAL-MNIST", "section_text": "In order to evaluate our network's ability to save computation on temporal data, we create a dataset that we call "Temporal-MNIST". This is just a reshuffling of the standard MNIST dataset so that similar frames tend to be nearby, giving the impression of a temporal sequence (see Appendix D for details). The columns of Figure 3 show eight snippets from the Temporal-MNIST dataset.

We started our experiment with a conventional ReLU network with layer sizes [784-200-200-10], pretrained on MNIST to a test accuracy of 97.9%. We then apply the same scale-optimization procedure for the Rounding Network used in the previous experiment to find the optimal rescalings under a range of values for λ. This time, we test the learned scale parameters on both the Rounding Network and the Sigma-Delta network. We do not attempt to directly optimize the scales with respect to the amount of computation in the Sigma-Delta network - we assume that the result should be similar to that for the Rounding network, but verifying this is a topic of future work.

The results of this experiment can be seen in Figure 4. We see that our discretized networks (Rounding and Sigma-Delta) converge to the error of the original network with fewer computations than are required for a forward pass of the original neural network. Note that the errors of the Rounding and
Note that the errors of the rounding and.\nWhere sj is the rounded signal from layer l, d+1 is the \"fan-out' (equivalent to the dimension of layer l + 1 in a fully-connected network), and pass : [l + 1..L] indicates that, on the backward pass, we simply pass the gradient through the rounding functions on layers l + 1..L.\n1.0 =1e-07 =1e-06 15 =1e-05 0.8 =0.0001 =0.001 =0.01 0.6 10 Error S 0.4 5 0.2 0 0.0E 0 1 2 0 50 100 150 200 250300350 400 Layer kOps/sample\nFigure 2: The Results of the \"Random Network\" experiment de scribed in Section |5.1] Left: A plot of the layerwise scales. Grey. lines show randomly sampled scales, and coloured lines show op-. timal scales for different values of X. Right: Gray dots show the error-scale tradeoffs of network instantiations using the (gray) ran domly sampled rescalings on the left. Coloured lines show the op. timization trajectory under different values of X, starting with the initial state (o), and ending with ..\nMNIST Temporal MNIST MNIST Temporal MNIST 20 Rounding Network (%) taen eaneeaaeon Network Original Network 4 3 50 150 250 350 50 150 250 350100 101 102 100 101 102 kOps/sample kOps/sample nJ/sample nJ/sample\nFigure 4: A visualization of our error-computation tradeoff curve for MNIST and our Temporal-mnist. dataset. Plot 1: Each point on the line for the Rounding (blue) and Sigma-Delta (green) network. correspond to the performance of the network for a different value of the error-computation tradeoff. parameter , ranging from = 10-10 (in the high-computation, low-error regime) to X = 10-5 (in the low-computation, high-error regime). The red line indicates the performance of the original, non-. discretized network. The red dot on the right indicates the number of flops required for a full forward. pass when doing dense multiplication, and the dot on the left indicates the number of flops when. factoring in layer sparsity. Note that for the Rounding and Sigma-Delta networks, we count the number. of additions, and for the original network we count the numbers of multiplications and additions (as per Section[3.3). Plot 2: The same, but on the Temporal-MNIST dataset. We see that the Sigma-Delta network uses less computation thanks to the temporal redundancy in the data. Plots 3 and 4: Half of the. original network's Ops were multiplies, which are more computational costly than the additions of the. Rounding and Sigma-Delta networks. In these plots the x-axis is rescaled according to the energy use. calculations of|Horowitz (2014), assuming the weights and parameters of the network are implemented. with 32-bit integer arithmetic. Numbers are in Appendix E.\nTemporal MNIST MM3MM33333 DoC 23 3 333333MM33 X T M 33 : : 3 5\nFigure 3: Some esam- ples from the Temporal- MNIST dataset. Each column shows a snippet of adjacent frames..\nSigma-Delta networks are identical. This is a consequence of their equivalence, described in Section. 3.2 Note also that the errors for all networks are identical between the MNIST and Temporal-MNIST. datasets, since for all networks, the prediction function is independent of the order in which inputs are processed. We see that as expected, our Sigma-Delta network does fewer computations than the. rounding network on the Temporal-MNIST dataset for the same error, because the update-mechanism. 
of this network takes advantage of the temporal redundancy in the data."}, {"section_index": "10", "section_name": "5.3 A DEEP CONVOLUTIONAL NETWORK ON VIDEO", "section_text": "Our final experiment is a preliminary exploration into how Sigma-Delta networks could perform on natural video data. We start with "VGG 19" - a 19-layer convolutional network trained to recognise the 1000 ImageNet categories. The network was trained and made public by Simonyan and Zisserman (2014). We take selected videos from the ILSVRC 2015 dataset (Russakovsky et al., 2015), and apply the rescaling method from Section 4.1 to adjust the scales on a per-layer basis. We initially had some difficulty in optimizing the scale parameters of the network to a stable point. The network would either fail to reduce computation when it could afford to, or reduce it to the point where the network's function was so corrupted that error gradients would be meaningless, causing the computation loss to win out and activations to drop to zero. A simple solution was to replace the rounding operation in training with additive random noise, which kept the network from collapsing into a regime where all activations become zero. More work is needed to understand why the addition of noise is necessary here. Figure 5 shows some preliminary results, which indicate that for video data we can get about 4-10x savings in the amount of computation required, in exchange for a modest loss in computational accuracy."}, {"section_index": "11", "section_name": "6 DISCUSSION", "section_text": "We have introduced Sigma-Delta networks, which give us a new way to compute the forward pass of a deep neural network. In Sigma-Delta networks, neurons communicate not by telling other neurons about their current level of activation, but about their change in activation. By discretizing these changes, we end up with very sparse communication between layers. The more similar two consecutive inputs $(x_t, x_{t+1})$ are, the less computation is required to update the network. We show that, while the Sigma-Delta network's internal state at time-step $t$ depends on past inputs $x_1..x_{t-1}$, the output $y_t$ only depends on the current input $x_t$. We show that there is a tradeoff between the accuracy of this network (with respect to the function of a traditional deep net with the same parameters) and the amount of computation required. Finally, we propose a method to jointly optimize error and computation, given a tradeoff parameter that indicates how much accuracy we are willing to sacrifice in exchange for fewer computations. We demonstrate that this method substantially reduces the number of computations required to run a deep network on natural, temporally redundant data. However, we observe in our final experiment (Figure 5, bottom) that our assumption that higher-level features would be more temporally stable - and thus require less computation in our Sigma-Delta net - was not true. We suspect that if we were to train the network from scratch on temporal data, we might learn more temporally stable "slow" features, but this is a topic of future work.

A huge amount of data (e.g. video, audio) comes in the form of temporal sequences, and there is an increasingly obvious need to be able to process this data efficiently. There is much to be gained by only doing processing when necessary, based on the contents of the data, and we provide one method for doing that. Further work is needed to determine whether this method would be of use on modern computing hardware, namely GPUs. The problem is that these devices are designed for large,
fixed-size array operations, and tend not to be good at taking advantage of sparsity in the data, which requires many random memory accesses to parameters. Fortunately, other devices such as the IBM TrueNorth (Cassidy et al., 2013) are being designed which keep memory close to processing, and can handle sparse data (and random memory access) much more efficiently.

This work opens up an interesting door. In asynchronous, distributed neural networks, a node may receive input from many different nodes asynchronously. Recomputing the function of the network every time a new input signal arrives may be prohibitively expensive. Our scheme deals with this by making the computational cost of an update proportional to the amount of change in the input. The next obvious step is to extend this approach to communicating changes in gradients, which may be helpful in setting up distributed, asynchronous schemes for training neural networks.

Code for our experiments can be found at: https://github.com/petered/sigma-delta"}, {"section_index": "12", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was supported by Qualcomm, who we'd also like to thank for discussing their past work in the field with us. We'd also like to thank fellow lab members, especially Changyong Oh and Matthias Reisser, for fruitful discussions contributing to this work.

Figure 5: A comparison of the original VGG net with the Rounding and Sigma-Delta networks using the same parameters, after scale optimization. Top: Frames taken from two videos from the ILSVRC2015 dataset. (The extracted frame labels include "lesser panda", "howler monkey", "spider monkey", and "colobus".) The two videos, with 201 frames in total, were spliced together. The first has a static background, and the second has more motion. Below every second image is the label generated for that image by VGGNet and the Sigma-Delta network (which is functionally equivalent to the Rounding Network, though numerical errors can lead to small changes, not shown here). Scale parameters were trained on separate videos. Second plot: A comparison of the computational cost per frame. The original VGG network has a fixed cost. The Sigma-Delta network has a cost that varies with the amount of action in the video. The spike in computation occurs at the point where the videos are spliced together. We can see that the Sigma-Delta network does more computation for the second video, in which there is more movement. During the first video it performs about 11 times less computation than the original network; during the second, about 4 times less. The difference would be more pronounced if we were to count energy use, as we did in Figure 4. Third plot: A plot of the cumulative mean error (over frames) of the Sigma-Delta/Rounding networks, as compared to the original VGGNet. Most of the time, it gets the same result (Top-1) out of 1000 possible categories. On almost every frame, the guess of the Sigma-Delta network is one of the top-5 guesses of the original VGGNet. Fourth plot: A breakdown of how much of the computational cost of each network comes from each layer. Fifth plot: The layer-wise ratio of the computational cost of the Sigma-Delta net to the Rounding net.
We had expected (and hoped) this ratio to become very low in the upper layers, as the high-level features should not change much between frames. However this was not the case (the ratio remains between 0.2 and 0.4 across all layers). It appears therefore that our assumption - that higher-level features would be more temporally stable - was not correct."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Arash Ardakani, Francois Leduc-Primeau, Naoya Onizawa, Takahiro Hanyu, and Warren J. Gross. VLSI implementation of deep neural network using integral stochastic computing. arXiv preprint arXiv:1509.08972, 2015.

Andrew S. Cassidy, Paul Merolla, John V. Arthur, Steve K. Esser, Bryan Jackson, Rodrigo Alvarez-Icaza, Pallab Datta, Jun Sawada, Theodore M. Wong, Vitaly Feldman, et al. Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores. In Neural Networks (IJCNN), The 2013 International Joint Conference on, pages 1-10. IEEE, 2013.

Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training neural networks with weights and activations constrained to +1 or -1.

Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Alexander Andreopoulos, David J. Berg, Jeffrey L. McKinstry, Timothy Melano, Davis R. Barch, et al. Convolutional networks for fast, energy-efficient neuromorphic computing. arXiv preprint arXiv:1603.08270, 2016.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pages 249-256, 2010.

Geoffrey Hinton. Neural networks for machine learning. Coursera, video lectures, 2012.

Peter O'Connor and Max Welling. Deep spiking networks. arXiv preprint arXiv:1602.08323, 2016.

Max Welling. Herding dynamical weights to learn. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1121-1128. ACM, 2009.

Young C. Yoon. LIF and simplified SRM neurons encode signals into spikes via a form of asynchronous pulse sigma-delta modulation. 2016.

Davide Zambrano and Sander M. Bohte. Fast and efficient asynchronous neural computation with adapting spiking neural networks. arXiv preprint arXiv:1609.02053, 2016.

In previous work (O'Connor and Welling, 2016), we used a quantization scheme which we refer to as herding for brevity, and because of its relation to the deterministic sampling scheme in (Welling, 2009), but which could otherwise be called Discrete-Time Bidirectional Sigma-Delta Modulation. The procedure is described in Algorithm 3. The input is summed into a potential φ over time until crossing a quantization threshold (in this case the point at which the round function changes value), which then resets.

Algorithm 3: Herding
1: Internal: φ ∈ R^d ← 0
2: Input: x_t ∈ R^d
3: φ ← φ + x_t
4: s ← round(φ)
5: φ ← φ − s
6: Return: s ∈ I^d

Algorithm 4: Delta-Herding
1: Internal: s_last ∈ I^d ← 0
2: Input: x_t ∈ R^d
3: s ← round(x_t)
4: s' ← s − s_last
5: s_last ← s
6: Return: s' ∈ I^d

Here we prove that Algorithm 4 is equivalent to applying Algorithm 3 to the output of a temporal difference module, i.e. $\mathrm{herd}(\Delta T(x_t)) = \Delta T(\mathrm{round}(x_t)) \; \forall t$.

First, observe the following equivalences, where $x_t$ is the input to the herding module:

$$s_t = \mathrm{round}(\phi_{t-1} + x_t) \in \mathbb{I}^d \tag{12}$$

$$\phi_t = (\phi_{t-1} + x_t) - s_t \;\Rightarrow\; |\phi_t| \le \tfrac{1}{2} \tag{13}$$

since $b = \mathrm{round}(a) \Rightarrow |a - b| \le \tfrac{1}{2}$ with $b \in \mathbb{I}$. Unrolling the recursion over time gives

$$\sum_{\tau=1}^{t} x_\tau = \sum_{\tau=1}^{t} s_\tau + \phi_t \tag{14}$$

which can be rearranged to solve for $s_t$:

$$\left|\phi_t\right| = \left|\sum_{\tau=1}^{t} x_\tau - \sum_{\tau=1}^{t} s_\tau\right| \le \frac{1}{2} \;\Rightarrow\; \sum_{\tau=1}^{t} s_\tau = \mathrm{round}\!\left(\sum_{\tau=1}^{t} x_\tau\right) \tag{15}$$

$$\Rightarrow\; s_t = \mathrm{round}\!\left(\sum_{\tau=0}^{t} x_\tau\right) - \mathrm{round}\!\left(\sum_{\tau=0}^{t-1} x_\tau\right) \tag{16}$$

So if the herding inputs are temporal differences, $x_\tau = \Delta T(u_\tau) = u_\tau - u_{\tau-1}$, the sums telescope and

$$s_t = \mathrm{round}(u_t) - \mathrm{round}(u_{t-1}) \tag{17}$$

leaving us with the Delta-Herding algorithm (Algorithm 4).
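The equivalence is easy to check numerically. A minimal sketch (ours), using a round-half-up rule to sidestep tie-breaking ambiguity:

```python
import numpy as np

rnd = lambda a: np.floor(a + 0.5)   # round-half-up: shift-invariant over integers

def herd(xs):
    """Algorithm 3 over a sequence: accumulate into phi, emit rounded value, reset."""
    phi, out = 0.0, []
    for x in xs:
        phi += x
        s = rnd(phi)
        phi -= s
        out.append(s)
    return np.array(out)

def delta_of_rounded(us):
    """Right-hand side of the claim: Delta-T applied to round(u_t)."""
    r = rnd(np.asarray(us))
    return np.diff(np.concatenate([[0.0], r]))

us = np.random.default_rng(0).uniform(-3, 3, size=20)   # arbitrary signal u_1..u_t
diffs = np.diff(np.concatenate([[0.0], us]))            # Delta-T(u_t), with u_0 = 0
assert np.allclose(herd(diffs), delta_of_rounded(us))   # herd(dT(u)) == dT(round(u))
```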
Therefore, if we have a linear function $w(x)$, and make use of Equation 1, then we can see that the following is true:

$$\Sigma T(w(\mathrm{herd}(\Delta T(x)))) = \Sigma T(w(\Delta T(\mathrm{round}(x)))) = w(\Sigma T(\Delta T(\mathrm{round}(x)))) = w(\mathrm{round}(x)) \tag{18}$$"}, {"section_index": "14", "section_name": "B CALCULATING FLOPS", "section_text": "When computing the number of operations required for a forward pass, we only account for the matrix products/convolutions (which form the bulk of the computation done by a neural network), and not the hidden-layer activation functions.

We compute the number of operations required for a forward pass of a fully connected network as follows. For the non-discretized network, the flop count for a single forward pass of a single data point is:

$$nFlops_{dense} = \sum_{l=0}^{L-1} \left( d_l \cdot d_{l+1} + (d_l - 1) \cdot d_{l+1} + d_{l+1} \right) = 2 \sum_{l=0}^{L-1} d_l \cdot d_{l+1} \tag{19}$$

where $d_l$ is the dimensionality of layer $l$ (with $l = 0$ indicating the input layer). The first term counts the number of multiplications, the second the number of additions for reducing the dot-product, and the third the addition of the bias.

It can be argued that this is an unfair way to count the number of computations done by the non-discretized network, because of the sparsity of the input layer (due to the zero background of datasets like MNIST) and the hidden layers (due to ReLU units). Thus we also compute the number of operations for the non-discretized network when factoring in sparsity. The equation is:

$$nFlops_{sparse} = \sum_{l=0}^{L-1} \left( \sum_{i=0}^{N_l} ([a_l]_i \ne 0) \cdot d_{l+1} + \sum_{i=0}^{N_l} ([a_l]_i \ne 0) \cdot d_{l+1} \right) = 2 \sum_{l=0}^{L-1} \sum_{i=0}^{N_l} ([a_l]_i \ne 0) \cdot d_{l+1} \tag{20}$$

where $a_l$ are the layer activations, $N_l$ is the number of units in layer $l$, and $([a_l]_i \ne 0)$ is 1 if unit $i$ in layer $l$ has nonzero activation and 0 otherwise.

For the rounding networks, we count the total absolute value of the discrete activations:

$$nFlops_{round} = \sum_{l=0}^{L-1} \left( \sum_{i=0}^{N_l} \left|[s_l]_i\right| \cdot d_{l+1} + d_{l+1} \right) \tag{21}$$

where $s_l$ is the discrete activation of layer $l$. This corresponds to the number of operations that would be required for doing a dot product with the "sequential addition" method described in Section 3.2.

Finally, the Sigma-Delta network requires slightly fewer flops, because the bias only needs to be added once (at the beginning), so its cost is amortized:

$$nFlops_{\Sigma\Delta} = \sum_{l=0}^{L-1} \sum_{i=0}^{N_l} \left|[s_l]_i\right| \cdot d_{l+1} \tag{22}$$

In Section 4.1 we mention that we can "bake the scales into the parameters" for ReLU networks. Here we explain that statement. Suppose you have a function

$$f(x) = k_2 \cdot h\!\left(\frac{x \cdot w}{k_1} + b\right)$$

If our nonlinearity $h$ is homogeneous (i.e. $k \cdot h(x) = h(k \cdot x)$), as is the case for $\mathrm{relu}(x) = \max(0, x)$, we can collapse the scales $k$ into the parameters:

$$f(x) = k_2 \cdot \mathrm{relu}\!\left(\frac{x \cdot w}{k_1} + b\right) = \mathrm{relu}\!\left(x \cdot w \cdot \frac{k_2}{k_1} + k_2 \cdot b\right)$$

So after training the scales, for a given network, we can simply incorporate them into the parameters as $w' = w \cdot k_2 / k_1$ and $b' = k_2 \cdot b$.

The Temporal-MNIST dataset is a version of MNIST that is reshuffled so that similar frames end up being nearby. We generate this by iterating through the dataset, keeping a fixed-size buffer of candidates for the next frame. On every iteration, we compare all the candidates to the current frame and select the closest one. The place that this winning candidate occupied in the buffer is then filled by a new sample from the dataset, and the winning candidate becomes the current frame. The process is repeated until we have sorted through all frames in the dataset. Code for generating the dataset can be found at: https://github.com/petered/sigma-delta/blob/master/sigma_delta/temporal_mnist.py
Code for generating the dataset can be found at: https://github.com/petered/sigma-delta/blob/master/sigma_delta/temporal_mnist.py

Table 1: Results on the MNIST and Temporal-MNIST datasets. KFlops indicates the number of operations done by each network. For the Original Network, the number of Flops is considered when using both (Dense \ Sparse) matrix operations. The "Class Error" column shows the classification error on the training \ test set respectively. The "Energy" is an estimate of the average energy that would be used by arithmetic operations per sample, if the network were implemented with all integer values. This is based on the estimates of Horowitz (2014). Again, for the Original Network, the figure is based on the numbers for dense\sparse matrix operations.

Setting | Net Type | MNIST: KFlops/Test (ds\sp) | Class error (tr\ts) | Int32-Energy (nJ) | Temporal MNIST: KFlops/Test (ds\sp) | Class error (tr\ts) | Int32-Energy (nJ)
Unoptimized | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
Unoptimized | Round | 44 | 2.12\4.21 | 4.42 | 44 | 2.12\4.21 | 4.42
Unoptimized | Σ∆ | 53 | 2.12\4.21 | 5.32 | 24 | 2.12\4.21 | 2.49
λ=1e-10 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=1e-10 | Round | 209 | 0.07\2.39 | 20.9 | 209 | 0.07\2.39 | 20.9
λ=1e-10 | Σ∆ | 245 | 0.07\2.39 | 24.6 | 110 | 0.07\2.39 | 11
λ=3.59e-10 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=3.59e-10 | Round | 206 | 0.058\2.3 | 20.7 | 206 | 0.058\2.3 | 20.7
λ=3.59e-10 | Σ∆ | 243 | 0.058\2.3 | 24.3 | 109 | 0.058\2.3 | 11
λ=1.29e-09 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=1.29e-09 | Round | 178 | 0.094\2.42 | 17.8 | 178 | 0.094\2.42 | 17.8
λ=1.29e-09 | Σ∆ | 207 | 0.096\2.42 | 20.7 | 92 | 0.094\2.42 | 9.2
λ=4.64e-09 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=4.64e-09 | Round | 164 | 0.084\2.41 | 16.4 | 164 | 0.084\2.41 | 16.4
λ=4.64e-09 | Σ∆ | 193 | 0.082\2.41 | 19.4 | 87 | 0.084\2.41 | 8.75
λ=1.67e-08 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=1.67e-08 | Round | 122 | 0.19\2.55 | 12.2 | 122 | 0.19\2.55 | 12.2
λ=1.67e-08 | Σ∆ | 144 | 0.19\2.55 | 14.5 | 65 | 0.19\2.55 | 6.58
λ=5.99e-08 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=5.99e-08 | Round | 86 | 0.476\2.88 | 8.66 | 86 | 0.476\2.88 | 8.66
λ=5.99e-08 | Σ∆ | 102 | 0.478\2.88 | 10.3 | 47 | 0.476\2.88 | 4.71
λ=2.15e-07 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=2.15e-07 | Round | 72 | 1.17\3.28 | 7.21 | 72 | 1.17\3.28 | 7.21
λ=2.15e-07 | Σ∆ | 87 | 1.18\3.28 | 8.78 | 41 | 1.17\3.28 | 4.15
λ=7.74e-07 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=7.74e-07 | Round | 44 | 2.32\4.26 | 4.49 | 44 | 2.32\4.26 | 4.49
λ=7.74e-07 | Σ∆ | 54 | 2.32\4.27 | 5.46 | 26 | 2.32\4.26 | 2.61
λ=2.78e-06 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=2.78e-06 | Round | 34 | 5.91\7.37 | 3.49 | 34 | 5.91\7.37 | 3.49
λ=2.78e-06 | Σ∆ | 45 | 5.9\7.37 | 4.53 | 23 | 5.9\7.37 | 2.3
λ=1e-05 | Original | 397\107 | 0.024\2.24 | 636\173 | 397\107 | 0.024\2.24 | 636\173
λ=1e-05 | Round | 24 | 14.6\14.6 | 2.5 | 24 | 14.6\14.6 | 2.5
λ=1e-05 | Σ∆ | 35 | 14.6\14.6 | 3.58 | 19 | 14.6\14.6 | 1.98

We had initially expected that, when a convolutional network is tasked with processing subsequent frames of video, high-level features would change much more slowly than the pixels and low-level features. This would give a computational advantage to our Sigma-Delta networks, whose computational cost scales with the amount of change in the feature representations. To our surprise, this appeared not to be the case. See the final plot of Figure 5. To verify that this was a property of the original convolutional network (and not somehow related to our discretization scheme), we take the same snippet of video used for Figure 5 and measure the inter-frame differences. Figure 6 shows the results of this small experiment, and confirms that our initial belief - that inter-frame differences should become smaller and smaller at higher layers - was not quite correct.
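A minimal sketch of the inter-frame measurement behind Figure 6, assuming per-layer feature arrays of shape (n_frames, n_features); the per-frame L1 normalization is our reading of the Figure 6 caption:

import numpy as np

def interframe_l1(features):
    # features: (n_frames, n_features) activations of one layer for consecutive
    # frames; normalize each frame's features to unit L1 before comparing.
    f = features / np.abs(features).sum(axis=1, keepdims=True)
    return np.abs(np.diff(f, axis=0)).sum(axis=1)  # one distance per frame pair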
[Figure 6 panels: "Inter-Frame L1 Distance" and "Inter-Frame cos Distance" heatmaps (Frame # vs. Layer #), with per-layer norms and inter-frame differences plotted below.]

Figure 6: Top-Left: A heatmap showing the L1-distances between the feature representations (post-nonlinearity) of adjacent frames from the video in Figure 5 at different layers (rows) and frames (columns). The input is considered to be layer 0. Feature representations have been L1-normalized per-layer. Bottom Left: The L1-norms (which are 1 due to the normalization) and inter-frame L1-distances for each layer, averaged over frames. Top and Bottom Right: The same measurements, with the cosine-similarity metric instead of L1. We note from these plots that the inter-frame difference is not much smaller in higher layers than it is at the pixel level, and that in the lower layers, feature representations of neighbouring frames are significantly more dissimilar than they are at the pixel level."}]
HJF3iD9xe | [{"section_index": "0", "section_name": "DEEP LEARNING WITH SETS AND POINT CLOUDS", "section_text": "Siamak Ravanbakhsh, Jeff Schneider & Barnabas Poczos"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recent progress in deep learning (LeCun et al., 2015) has witnessed its application to structured settings, including graphs (Bruna et al., 2013; Duvenaud et al., 2015), groups (Gens & Domingos, 2014; Christopher, 2014; Cohen & Welling, 2016), sequences and hierarchies (Irsoy & Cardie, 2014; Socher et al., 2013). Here, we introduce a simple permutation-equivariant layer for deep learning with set structure, where the primary dataset is a collection of sets, possibly of different sizes. Note that each instance may have a structure of its own, such as graph, image, or another set. In typical machine-learning applications, the iid assumption implies that the entire dataset itself has a set structure. Therefore, our special treatment of the set structure is only necessary due to the multiplicity of distinct yet homogeneous (data)sets. Here, we show that a simple parameter-sharing scheme enables a general treatment of sets within supervised and semi-supervised settings.

In the following, after introducing the set layer in Section 2, we explore several novel applications. Section 3 studies supervised learning with sets that requires "invariance" to permutation of inputs. Section 3.1 considers the task of summing multiple MNIST digits, and Section 3.2 studies an important application of sets in representing low-dimensional point-clouds. Here, we show that deep networks can successfully classify objects using their point-cloud representation.

Section 4 presents a numerical study in the semi-supervised setting, where the output of the multi-layer network is "equivariant" to the permutation of inputs. We use the permutation-equivariant layer to perform outlier detection on the CelebA face dataset in Section 4.1 and improve galaxy red-shift estimates using clustering information in Section 5."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-invariant networks to perform point-cloud classification and MNIST-digit summation, where in both cases the output is invariant to permutations of the input. In a semi-supervised setting, where the goal is to make predictions for each instance within a set, we demonstrate the usefulness of this type of layer in set-outlier detection as well as semi-supervised learning with clustering side information."}

Let x_n ∈ X denote an instance; we use x = [x_1, ..., x_N] to denote a vector of N instances. Here, x_n could be a feature-vector, an image or any other structured object. Our goal is to design neural network layers that are "indifferent" to permutations of instances in x. Achieving this goal amounts to treating x as a "set" rather than a vector.

The function f : X^N → Y^N is equivariant to the permutation of its inputs iff

f(πx) = πf(x)   ∀π ∈ S_N

where the symmetric group S_N is the set of all permutations of indices 1, ..., N. Similarly, the function f : R^N → R is invariant to permutation of its inputs - i.e., a.k.a. a symmetric function (David et al., 1966) - iff f(πx) = f(x) ∀π ∈ S_N. Here, the action of π on a vector x ∈ R^N can be represented by a permutation matrix. With some abuse of notation, we use π ∈ {0, 1}^{N×N} to also denote this matrix.

Figure 1: Classification accuracy of different schemes in predicting the sum of (left) N=3 and (right) N=6 MNIST digits without access to individual image labels. The training set was fixed to 10,000 sets. (The compared schemes are Set Layer + Set Pooling, Set Pooling Only, Concatenated, and Stacked, at various dropout rates, plotted against the number of mini-batches.)

Given two permutation-equivariant functions f : X^N → Y^N and g : Y^N → Z^N, their composition is also permutation-equivariant; this is because g(f(πx)) = g(πf(x)) = πg(f(x)).

Consider the standard neural network layer

f_Θ(x) ≐ σ(Θx)   Θ ∈ R^{N×N}   (1)

Θ = λI + γ(11^T)   λ, γ ∈ R,  1 = [1, ..., 1]^T ∈ R^N   (2)

This function is simply a non-linearity applied to a weighted combination of (I) its input Ix and (II) the sum of input values (11^T)x. Since summation does not depend on the permutation, the layer is permutation-equivariant. Therefore we can manipulate the operations and parameters in this layer, for example to get another variation

f(x) = σ(λIx − γ(max_n x_n)1)   (3)

where the max operation over elements of the set is (similar to summation) commutative, and using −γ instead of +γ amounts to a reparametrization. In practice using this variation performs better in some applications. This may be due to the fact that for λ = γ, the input to the non-linearity is max-normalized.

For multiple input-output channels, we may speed up the operation of the layer using matrix multiplication. Suppose we have K input channels - corresponding to K features for each instance in the set - with a set of size N, and K' output channels. Here, x ∈ R^{N×K} and f_Θ : R^{N×K} → R^{N×K'}. The permutation-equivariant layer parameters are Λ, Γ ∈ R^{K×K'} (replacing λ and γ in Eq. (2)). The output of this layer becomes

y = σ(xΛ − 1 x_max Γ),   x_max = max_n x   (4)

y = σ(β + (x − 1(max_n x))Γ)   (5)

where β ∈ R^{K'} is a bias parameter. This final variation of the permutation-equivariant layer is simply a fully connected layer where input features are max-normalized within each set.

Figure 2: Examples for 8 out of 40 object classes (column) in the ModelNet40. Each point-cloud is produced by sampling 1000 particles from the mesh representation of the original ModelNet40 instances. Two point-clouds in the same column are from the same class. The projection of particles into the xy, zy and xz planes is added for better visualization.

With multiple input-output channels, the complexity of this layer for each set is O(N K K'). Subtracting the mean or max over the set also reduces the internal covariate shift (Ioffe & Szegedy, 2015), and we observe that for deep networks (even using tanh activation), batch-normalization is not required.
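As a concrete illustration, here is a minimal NumPy sketch of the layer in Eq. (5) (the names and the tanh nonlinearity are our choices); note the max is taken over the set dimension, so permuting the rows of x permutes the rows of y identically:

import numpy as np

def perm_equivariant_layer(x, Gamma, beta):
    # x: (N, K) set of N instances; Gamma: (K, K') weights; beta: (K',) bias.
    x_max = x.max(axis=0, keepdims=True)        # (1, K), invariant to row order
    return np.tanh(beta + (x - x_max) @ Gamma)  # (N, K'), Eq. (5)

# Equivariance check: permuting the input rows permutes the output rows.
x = np.random.randn(6, 4); G = np.random.randn(4, 3); b = np.random.randn(3)
p = np.random.permutation(6)
assert np.allclose(perm_equivariant_layer(x, G, b)[p], perm_equivariant_layer(x[p], G, b))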
When applying dropout (Srivastava et al., 2014) to regularize permutation-equivariant layers with multiple output channels, it is often beneficial to simultaneously dropout the channels for all instances within a set. In particular, when set-members share similar features, independent dropout effectively does not regularize the model, as the network learns to replace the missing features from other set-members.

In the remainder of this paper, we demonstrate how this simple treatment of sets can solve novel and non-trivial problems that occasionally have no alternative working solutions within deep learning.

In a related work, Chen et al. (2014) construct deep permutation-invariant features by pairwise coupling of features at the previous layer, where f([x_i, x_j]) = [|x_i − x_j|, x_i + x_j] is invariant to transposition of i and j.²

As we see shortly in Section 3.1, in the supervised setting, even simple application of set-pooling, without max-normalization of Eq. (5), performs very well in practice. However, in the semi-supervised setting, since there is no pooling operation, the permutation-invariant layer requires max-normalization in order to obtain the required information about the context of each instance.

²Due to the change in the number of features/channels in each layer this approach cannot produce permutation "equivariant" layers. Also, this method requires a graph to guide the multi-resolution partitioning of the nodes, which is then used to define the pairing of features in each layer.

The permutation-equivariant layers that we introduced so far are useful for the semi-supervised (or transductive) setting, where we intend to predict a value per each instance in every set. In the supervised (or inductive) setting the task is to make a prediction for each set (rather than the instances within them), and we require permutation "invariance" of f : X^N → Y. A pooling operation over the set-dimension can turn any permutation-equivariant function f : X^N → Y^N permutation-invariant: f̃(x) = ⊕_n f(x)_n. Here ⊕ is any commutative operation such as summation or maximization.
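For instance, composing the equivariant layer sketched above with a pooling ⊕ (here, max) gives a set-level prediction that is unchanged under any reordering of the instances; a minimal sketch (names ours):

def invariant_set_model(x, Gamma, beta, W):
    # Equivariant layer followed by max-pooling over the set dimension,
    # then an ordinary dense layer on the pooled (order-free) summary.
    h = perm_equivariant_layer(x, Gamma, beta)  # (N, K')
    pooled = h.max(axis=0)                      # (K',), invariant to row order
    return pooled @ W                           # per-set output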
"}, {"section_name": "3.1 PREDICTING THE SUM OF MNIST DIGITS", "section_text": "The MNIST dataset (LeCun et al., 1998) contains 70,000 instances of 28 × 28 grey-scale stamps of digits in {0, ..., 9}. We randomly sample a subset of N images from this dataset to build 10,000 "sets" of training and 10,000 sets of validation images, where the set-label is the sum of digits in that set (i.e., individual labels per image are unavailable).

In our first experiment, each set contains N = 3 images, and the set label is a number between 0 ≤ y ≤ 3 * 9 = 27. We then considered four different models for predicting the sum - concatenating the images, stacking them, using set-pooling only, and using a set layer followed by set-pooling (the four schemes compared in Figure 1). All models are defined to have a similar number of layers and parameters; see Appendix B.1 for details. The output of all models is a (9N + 1)-way softmax, predicting the sum of N digits.

Figure 1(left) shows the prediction accuracy over the validation-set for different models for N = 3. We see that using the set-layer performs the best. However, interestingly, using set-pooling alone produces similarly good results. We also observe that concatenating the digits eventually performs well, despite its lack of invariance. This is because, due to the sufficiently large size of the dataset, most permutations of length three appear in the training set.

However, as we increase the size of each set to N = 6, permutation-invariance becomes crucial; see Figure 1(right). We see that using the default dropout rate of 20%, the model simply memorizes the input instances (indicated by the discrepancy of training/validation error), and by increasing this dropout rate, the model simply predicts values close to the mean value. However, the permutation-invariant layer learns to predict the sum of six digits with > 80% accuracy, without having access to individual image labels. Performance using set-pooling alone is similar."}, {"section_name": "3.2 POINT CLOUD CLASSIFICATION", "section_text": "A low-dimensional point-cloud is a set of low-dimensional vectors. This type of data is frequently encountered in various applications from robotics and vision to cosmology. In these applications, point-cloud data is often converted to voxel or mesh representation at a preprocessing step (e.g., Maturana & Scherer, 2015; Ravanbakhsh et al., 2016; Lin et al., 2004). Since the output of many range sensors, such as LiDAR - which are extensively used in applications such as autonomous vehicles - is in the form of point-cloud, direct application of deep learning methods to point-cloud is highly desirable. Moreover, when working with point-clouds rather than voxelized 3D objects, it is easy to apply transformations such as rotation and translation as differentiable layers at low cost.

Here, we show that, treating the point-cloud data as a set, we can use the set-equivariant layer of Eq. (5) to classify the point-cloud representation of a subset of ShapeNet objects (Chang et al., 2015) called ModelNet40 (Wu et al., 2015). This subset consists of 3D representations of 9,843 training and 2,468 test instances belonging to 40 classes of objects; see Fig. 2. We produce point-clouds with 100, 1000 and 5000 particles each (x, y, z-coordinates) from the mesh representation of objects using the point-cloud-library's sampling routine (Rusu & Cousins, 2011). Each set is normalized by the initial layer of the deep network to have zero mean (along individual axes) and unit (global) variance. Additionally we experiment with the K-nearest neighbor graph of each point-cloud and report the results using graph-convolution; see Appendix B.3 for model details.

Table 1: Classification accuracy and the (size of) representation used by different methods on the ModelNet40 dataset.

model | instance size | representation | accuracy
set-layer + transformation (ours) | 5000 × 3 | point-cloud | 90 ± .3%
set-layer (ours) | 1000 × 3 | point-cloud | 87 ± 1%
set-pooling only (ours) | 1000 × 3 | point-cloud | 83 ± 1%
set-layer (ours) | 100 × 3 | point-cloud | 82 ± 2%
KNN graph-convolution (ours) | 1000 × (3 + 8) | directed 8-regular graph | 58 ± 2%
3DShapeNets (Wu et al., 2015) | 30³ | voxels (using convolutional deep belief net) | 77%
DeepPano (Shi et al., 2015) | 64 × 160 | panoramic image (2D CNN + angle-pooling) | 77.64%
VoxNet (Maturana & Scherer, 2015) | 32³ | voxels (voxels from point-cloud + 3D CNN) | 83.10%
MVCNN (Su et al., 2015) | 164 × 164 × 12 | multi-view images (2D CNN + view-pooling) | 90.1%
VRN Ensemble (Brock et al., 2016) | 32³ | voxels (3D CNN, variational autoencoder) | 95.54%
3D GAN (Wu et al., 2016) | 64³ | voxels (3D CNN, generative adversarial training) | 83.3%

Table 1 compares our method against the competition.³ Note that we achieve our best accuracy using the 5000 × 3 dimensional representation of each object, which is much smaller than most other methods. All other techniques use either voxelization or multiple views of the 3D object for classification.
Interestingly, variations of view/angle-pooling (e.g., Su et al., 2015; Shi et al., 2015) can be interpreted as set-pooling, where the class-label is invariant to permutation of different views. The results also show that using fully-connected layers with set-pooling alone (without max-normalization over the set) works relatively well.

We see that reducing the number of particles to only 100 still produces comparatively good results. Using graph-convolution is computationally more challenging and produces inferior results in this setting. The results using 5000 particles are also invariant to small changes in scale and rotation around the z-axis; see Appendix B.3 for details.

Figure 3: Each box is the particle-cloud maximizing the activation of a unit at the first (top) and second (bottom) permutation-equivariant layers of our model. Two images of the same column are two different views of the same point-cloud.

Features. To visualize the features learned by the set layers, we used Adamax (Kingma & Ba, 2014) to locate 1000 particle coordinates maximizing the activation of each unit.⁴ Activating the tanh units beyond the second layer proved to be difficult. Figure 3 shows the particle-cloud-features learned at the first and second layers of our deep network. We observed that the first layer learns simple localized (often cubic) point-clouds at different (x, y, z) locations, while the second layer learns more complex surfaces with different scales and orientations.

³The error-bar on our results is due to variations depending on the choice of particles during test time and is estimated over three trials.

⁴We started from a uniformly distributed set of particles and used a learning rate of .01 for Adamax, with first and second order moments of .1 and .9 respectively. We optimized the input in 10 iterations. The results of Fig. 3 are limited to instances where tanh units were successfully activated. Since the input at the first layer of our deep network is normalized to have zero mean and unit standard deviation, we do not need to constrain the input while maximizing the unit's activation.

In semi-supervised or transductive learning, some/all instances within each training set are labelled. Our goal is to make predictions for the individual instances within a test set. Therefore, the permutation-equivariant layer leverages the interaction between the set-members to label individual members. Note that in this case, we do not perform any pooling operation over the set dimension of the data.

"}, {"section_name": "4.1 SET ANOMALY DETECTION", "section_text": "Figure 4: Each row shows a set, constructed from the CelebA dataset, such that all set members except for an outlier share at least two attributes (on the right; e.g., black hair & rosy cheeks, attractive & heavy makeup, double-chin & wavy hair, black hair & brown hair, attractive & mouth slightly open). The outlier is identified with a red frame. The model is trained by observing examples of sets and their anomalous members, without access to the attributes. The probability assigned to each member by the outlier detection network is visualized using a red bar at the bottom of each image. The probabilities in each row sum to one.
See Appendix B.2 for more examples.

The objective here is for the deep model to find the anomalous face in each set, simply by observing examples and without any access to the attribute values. The CelebA dataset (Liu et al., 2015) contains 202,599 face images, each annotated with 40 boolean attributes. We use 64 × 64 stamps and, using these attributes, we build 18,000 sets, each containing N = 16 images (on the training set), as follows: after randomly selecting two attributes, we draw 15 images where those attributes are present and a single image where both attributes are absent. Using a similar procedure we build sets on the test images. No individual person's face appears in both train and test sets.

Our deep neural network consists of 9 2D-convolution and max-pooling layers followed by 3 permutation-equivariant layers, and finally a softmax layer that assigns a probability value to each set member. (Note that one could identify an arbitrary number of outliers using a sigmoid activation at the output.) Our trained model successfully finds the anomalous face in 75% of test sets. Visually inspecting these instances suggests that the task is non-trivial even for humans; see Fig. 4. For details of the model, training, and more identification examples see Appendix B.2.

As a baseline, we repeat the same experiment by using a set-pooling layer after the convolution layers and replacing the permutation-equivariant layers with fully connected layers with the same number of hidden units/output-channels, where the final layer is a 16-way softmax. The resulting network shares the convolution filters for all instances within all sets, however the input to the softmax is not equivariant to the permutation of input images. Permutation-equivariance seems to be crucial here, as the baseline model achieves a training and test accuracy of ~ 6.3%; the same as random selection.
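A minimal sketch of this set construction, assuming a boolean attribute matrix attrs of shape (n_images, 40) (names and sampling details are ours):

import numpy as np

def make_outlier_set(attrs, rng, n=16):
    # Pick two attributes; draw n-1 images having both, and one having neither.
    a, b = rng.choice(attrs.shape[1], size=2, replace=False)
    pos = np.where(attrs[:, a] & attrs[:, b])[0]
    neg = np.where(~attrs[:, a] & ~attrs[:, b])[0]
    members = rng.choice(pos, size=n - 1, replace=False)
    outlier = rng.choice(neg)
    # Returned image indices; in practice one would shuffle the outlier's position.
    return np.append(members, outlier)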
"}, {"section_name": "5 IMPROVED RED-SHIFT ESTIMATION USING CLUSTERING INFORMATION", "section_text": "An important regression problem in cosmology is to estimate the red-shift of galaxies, corresponding to their age as well as their distance from us (Binney & Merrifield, 1998). Two common types of observation for distant galaxies include a) photometric and b) spectroscopic observations, where the latter can produce more accurate red-shift estimates.

One way to estimate the red-shift from photometric observations is using a regression model (Connolly et al., 1995). We use a multi-layer Perceptron for this purpose and use the more accurate spectroscopic red-shift estimates as the ground-truth. As another baseline, we have a photometric red-shift estimate that is provided by the catalogue and uses various observations (including clustering information) to estimate individual galaxy red-shifts. Our objective is to use the clustering information of the galaxies to improve our red-shift prediction using the multi-layer Perceptron.

Note that the prediction for each galaxy does not change by permuting the members of the galaxy cluster. Therefore, we can treat each galaxy cluster as a "set" and use the permutation-equivariant layer to estimate the individual galaxy red-shifts.

For each galaxy, we have 17 photometric features⁵ from the redMaPPer galaxy cluster catalog (Rozo & Rykoff, 2014), which contains photometric readings for 26,111 red galaxy clusters. In this task, in contrast to the previous ones, sets have different cardinalities; each galaxy-cluster in this catalog has between ~ 20-300 galaxies - i.e., x ∈ R^{N(c)×17}, where N(c) is the cluster-size. See Fig. 5(a) for the distribution of cluster sizes. The catalog also provides accurate spectroscopic red-shift estimates for a subset of these galaxies, as well as photometric estimates that use clustering information. Fig. 5(b) reports the distribution of available spectroscopic red-shift estimates per cluster.

We randomly split the data into 90% training and 10% test clusters, and use the following simple architecture for semi-supervised learning. We use four permutation-equivariant layers with 128, 128, 128 and 1 output channels respectively, where the output of the last layer is used as the red-shift estimate. The squared loss of the prediction for the available spectroscopic red-shifts is minimized.⁶ Fig. 5(c) shows the agreement of our estimates with the spectroscopic readings on the galaxies in the test-set with spectroscopic readings. The figure also compares the photometric estimates provided by the catalogue (see Rozo & Rykoff, 2014) to the ground-truth. As is customary in cosmology, we report the scatter |z_spec − z| / (1 + z_spec), where z_spec is the accurate spectroscopic measurement and z is a photometric estimate. The average scatter using our model is .023, compared to the scatter of .025 in the original photometric estimates for the redMaPPer catalog. Both of these values are averaged over all the galaxies with spectroscopic measurements in the test-set.

We repeat this experiment, replacing the permutation-equivariant layers with fully connected layers (with the same number of parameters) and only using the individual galaxies with an available spectroscopic estimate for training. The resulting average scatter for the multi-layer Perceptron is .026, demonstrating that using clustering information indeed improves photometric red-shift estimates.
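The reported scatter is straightforward to compute; a minimal sketch (array names ours):

import numpy as np

def redshift_scatter(z_spec, z_photo):
    # Average |z_spec - z| / (1 + z_spec) over galaxies with spectroscopic labels.
    return np.mean(np.abs(z_spec - z_photo) / (1 + z_spec))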
⁵We have a single measurement for each of the u, g, r, i and z bands as well as measurement error bars, the location of the galaxy in the sky, and the probability of each galaxy being the cluster center. We do not include the information regarding the richness estimates of the clusters from the catalog for any of the methods, so that the baseline multi-layer Perceptron is blind to the clusters.

⁶We use mini-batches of size 128, Adam (Kingma & Ba, 2014), with a learning rate of .001, β₁ = .9 and β₂ = .999. All layers except for the last layer use tanh units and simultaneous dropout with a 50% dropout rate.

Figure 5: Application of the permutation-equivariant layer to semi-supervised red-shift prediction using clustering information: a) distribution of cluster (set) size; b) distribution of reliable red-shift estimates per cluster; c) prediction of red-shift on the test-set (versus ground-truth) using clustering information, as well as RedMaPPer photometric estimates (also using clustering information).

"}, {"section_name": "CONCLUSION", "section_text": "We introduced a simple parameter-sharing scheme to effectively achieve permutation-equivariance in deep networks and demonstrated its effectiveness in several novel supervised and semi-supervised tasks. Our treatment of the set structure also generalizes various settings in multi-instance learning (Ray et al., 2011; Zhou et al., 2009). In addition to our experimental settings, the permutation-invariant layer can be used for distribution regression and classification, which have become popular recently (Szabo et al., 2016). In our experiments with point-cloud data we observed the model to be robust to the variations in the number of particles in each cloud, suggesting the usefulness of our method in the general setting of distribution regression - where the number of samples should not qualitatively affect our representation of a distribution. We leave further investigation of this direction to future work."}, {"section_name": "ACKNOWLEDGEMENT", "section_text": "We would like to thank Francois Lanusse for pointing us to the redMaPPer dataset, and the anonymous reviewers as well as Andrew Wagner for valuable feedback."}, {"section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

James Binney and Michael Merrifield. Galactic astronomy. Princeton University Press, 1998.

Andrew Brock, Theodore Lim, JM Ritchie, and Nick Weston. Generative and discriminative voxel modeling with convolutional neural networks. arXiv preprint arXiv:1608.04236, 2016.

Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.

Xu Chen, Xiuyuan Cheng, and Stephane Mallat. Unsupervised deep haar scattering on graphs. In Advances in Neural Information Processing Systems, pp. 1709-1717, 2014.

Djork-Arne Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.

Taco S Cohen and Max Welling. Group equivariant convolutional networks. arXiv preprint arXiv:1602.07576, 2016.

AJ Connolly, I Csabai, AS Szalay, DC Koo, RG Kron, and JA Munn. Slicing through multicolor space: Galaxy redshifts from broadband photometry. arXiv preprint astro-ph/9508100, 1995.

F.N. David, M.S. Kendall, and D.E. Barton. Symmetric Functions and Allied Tables. University Press, 1966.

David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in Neural Information Processing Systems, pp. 2224-2232, 2015.

Robert Gens and Pedro M Domingos. Deep symmetry networks. In Advances in Neural Information Processing Systems, pp. 2537-2545, 2014.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.

Hong-Wei Lin, Chiew-Lan Tai, and Guo-Jin Wang. A mesh reconstruction algorithm driven by an intrinsic property of a point cloud. Computer-Aided Design, 36(1):1-9, 2004.

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), 2015.
Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.

Ozan Irsoy and Claire Cardie. Deep recursive neural networks for compositionality in language. In Advances in Neural Information Processing Systems, pp. 2096-2104, 2014.

Soumya Ray, Stephen Scott, and Hendrik Blockeel. Multi-instance learning. In Encyclopedia of Machine Learning, pp. 701-710. Springer, 2011.

Eduardo Rozo and Eli S Rykoff. redMaPPer II: X-ray and SZ performance benchmarks for the SDSS catalog. The Astrophysical Journal, 783(2):80, 2014.

Radu Bogdan Rusu and Steve Cousins. 3D is here: Point Cloud Library (PCL). In IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, May 9-13 2011.

Baoguang Shi, Song Bai, Zhichao Zhou, and Xiang Bai. DeepPano: Deep panoramic representation for 3-d shape recognition. IEEE Signal Processing Letters, 22(12):2339-2343, 2015.

Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), volume 1631, pp. 1642. Citeseer, 2013.

Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1912-1920, 2015.

Zhi-Hua Zhou, Yu-Yin Sun, and Yu-Feng Li. Multi-instance learning by treating instances as non-iid samples. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1249-1256. ACM, 2009.

"}, {"section_name": "A PROOFS", "section_text": "From the definition of permutation equivariance, f_Θ(πx) = πf_Θ(x), and the definition of f in Eq. (1), the condition becomes σ(Θπx) = πσ(Θx), which (assuming sigmoid is a bijection) is equivalent to Θπ = πΘ. Therefore we need to show that the necessary and sufficient condition for the matrix Θ ∈ R^{N×N} to commute with all permutation matrices π ∈ S_N is given by Eq. (2). We prove this in both directions:

To see why Θ = λI + γ(11^T) commutes with any permutation matrix, first note that commutativity is linear - that is,

πΘ₁ = Θ₁π ∧ πΘ₂ = Θ₂π ⟹ π(aΘ₁ + bΘ₂) = (aΘ₁ + bΘ₂)π.

Since both the identity matrix I and the constant matrix 11^T commute with any permutation matrix, so does their linear combination Θ = λI + γ(11^T).

Conversely, we need to show that a matrix Θ that commutes with "all" permutation matrices has the following properties:

- All diagonal elements are identical: Let π^{k,l} for 1 ≤ k, l ≤ N, k ≠ l, be a transposition (i.e., a permutation that only swaps two elements). The inverse permutation matrix of π^{k,l} is the permutation matrix of π^{l,k} = (π^{k,l})^T. We see that commutativity of Θ with the transposition π^{k,l} implies that Θ_{k,k} = Θ_{l,l}:

π^{k,l}Θ = Θπ^{k,l} ⟹ π^{k,l}Θπ^{l,k} = Θ ⟹ (π^{k,l}Θπ^{l,k})_{l,l} = Θ_{l,l} ⟹ Θ_{k,k} = Θ_{l,l}.

Therefore, since π and Θ commute for any permutation π, they also commute for any transposition π^{k,l}, and therefore Θ_{i,i} = λ ∀i.

- All off-diagonal elements are identical: We show that since Θ commutes with any product of transpositions, any choice of two off-diagonal elements should be identical. Let (i, j) and (i', j') be the indices of two off-diagonal elements (i.e., i ≠ j and i' ≠ j'). Moreover, for now assume i ≠ i' and j ≠ j'. Application of the transposition π^{i,i'}Θ swaps the rows i, i' in Θ. Similarly, Θπ^{j,j'} switches the jth column with the j'th column. From the commutativity property of Θ and π ∈ S_N we have

π^{j',j}π^{i,i'}Θ = Θπ^{j',j}π^{i,i'} ⟹ π^{j',j}π^{i,i'}Θ(π^{j',j}π^{i,i'})^{-1} = Θ ⟹ π^{j',j}π^{i,i'}Θπ^{i',i}π^{j,j'} = Θ ⟹ (π^{j',j}π^{i,i'}Θπ^{i',i}π^{j,j'})_{i,j} = Θ_{i,j} ⟹ Θ_{i',j'} = Θ_{i,j},

where in the last step we used our assumptions that i ≠ i', j ≠ j', i ≠ j and i' ≠ j'. In the cases where either i = i' or j = j', we can use the above to show that Θ_{i,j} = Θ_{i'',j''} and Θ_{i',j'} = Θ_{i'',j''} for some i'' ≠ i, i' and j'' ≠ j, j', and conclude Θ_{i,j} = Θ_{i',j'}.
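A quick numerical check of this characterization (a sketch of ours, not from the paper):

import numpy as np
from itertools import permutations

lam, gam, N = 1.5, -0.7, 4
Theta = lam * np.eye(N) + gam * np.ones((N, N))
# A matrix of the form in Eq. (2) commutes with every permutation matrix.
for perm in permutations(range(N)):
    P = np.eye(N)[list(perm)]
    assert np.allclose(P @ Theta, Theta @ P)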
"}, {"section_name": "B DETAILS OF MODELS", "section_text": "In the following, all our implementations use Tensorflow (Abadi et al., 2016)."}, {"section_name": "B.1 MNIST SUMMATION", "section_text": "All nonlinearities are exponential linear units (ELU; Clevert et al., 2015). All models have 4 convolution layers followed by max-pooling. The convolution layers have respectively 16-32-64-128 output channels and 5 × 5 receptive fields.

Each pooling, fully-connected and set-layer is followed by a 20% dropout. For models III and IV we use simultaneous dropout. In models I and II, the convolution layers are followed by two fully connected layers with 128 hidden units. In model II, after the first fully connected layer we perform set-pooling followed by another dense layer with 128 hidden units. In model IV, the convolution layers are followed by a permutation-equivariant layer with 128 output channels, followed by set-pooling and a fully connected layer with 128 hidden units. For optimization, we used a learning rate of .0003 with Adam using the default β₁ = .9 and β₂ = .999.
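Simultaneous dropout means a dropped channel is dropped for every member of a set at once; a minimal sketch, assuming activations of shape (N, K) for one set (names ours):

import numpy as np

def simultaneous_dropout(h, rate, rng):
    # One mask per *channel*, shared across all N set members, so members
    # cannot recover a dropped feature from each other.
    mask = (rng.random(h.shape[1]) > rate) / (1.0 - rate)
    return h * mask[None, :]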
In the model IV, the convolution layers are followed by a permutation-equivariant layer with 128 output channels, followed by set- pooling and a fully connected layer with 128 hidden units. For optimization, we used a learning rate of .0003 with Adam using the default 1 = .9 and 2 = .999."}, {"section_index": "14", "section_name": "B.3 MODELS FOR POINT-CLOUDS CLASSIFICATION", "section_text": "Set convolution. We use a network comprising of 3 permutation-equivariant layers with 256 chan-. nels followed by max-pooling over the set structure. The resulting vector representation of the set is. then fed to a fully connected layer with 256 units followed by a 40-way softmax unit. We use Tanh activation at all layers and dropout on the layers after set-max-pooling (i.e., two dropout operations). with 50% dropout rate. Applying dropout to permutation-equivariant layers for point-cloud data. deteriorated the performance. We observed that using different types of permutation-equivariant. layers (see Section 2) and as few as 64 channels for set layers changes the result by less than 5% in. classification accuracy.\nGraph convolution. For each point-cloud instance with 1o0o particles, we build a sparse K-nearest neighbor graph and use the three point coordinates as input features. We normalized all graphs at the preprocessing step. For direct comparison with set layer, we use the exact architecture of 3 graph convolution layer followed by set-pooling (global graph pooling) and dense layer with 256 units We use exponential linear activation function instead of Tanh as it performs better for graphs. Due to over-fitting, we use a heavy dropout of 50% after graph-convolution and dense layers. Similar to dropout for sets, all the randomly selected features are simultaneously dropped across the grapl nodes. the We use a mini-batch size of 64 and Adam for optimization where the learning rate is .001 (the same as that of permutation-equivariant counter-part).\nDespite our efficient sparse implementation using Tensorflow, graph-convolution is significantly slower than the set layer. This prevented a thorough search for hyper-parameters and it is quite possible that better hyper-parameter tuning would improve the results that we report here\nFor the setting with 5000 particles, we increase the number of units to 512 in all layers and randomly rotate the input around the z-axis. We also randomly scale the point-cloud by s ~ U(.8, 1./.8). For this setting only, we use Adamax (Kingma & Ba, 2014) instead of Adam and reduce learning rate from .001 to .0005"}] |
S1Bm3T_lg | [{"section_index": "0", "section_name": "COMPOSITIONAL KERNEL MACHINES", "section_text": "Robert Gens & Pedro Domingos

Department of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA

{rcg,pedrod}@cs.washington.edu"}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "The depth of state-of-the-art convnets is a double-edged sword: it yields both nonlinearity for sophisticated discrimination and nonconvexity for frustrating optimization. The established training procedure for ILSVRC classification cycles through the million-image training set more than fifty times, requiring substantial stochasticity, data augmentation, and hand-tuned learning rates. On today's consumer hardware, the process takes several days. However, performance depends heavily on hyperparameters, which include the number and connections of neurons as well as optimization details. Unfortunately, the space of hyperparameters is unbounded, and each configuration of hyperparameters requires the aforementioned training procedure. It is no surprise that large organizations with enough computational power to conduct this search dominate this task.

Yet mastery of object recognition on a static dataset is not enough to propel robotics and internet-scale applications with ever-growing instances and categories. Each time the training set is modified, the convnet must be retrained ("fine-tuned") for optimum performance. If the training set grows linearly with time, the total training computation grows quadratically.

We propose the Compositional Kernel Machine (CKM), a kernel-based visual classifier that has the symmetry and compositionality of convnets but with the training benefits of instance-based learning (IBL). CKMs branch from the original instance-based methods with virtual instances, an exponential set of plausible compositions of training instances. The first steps in this direction are promising compared to IBL and deep methods, and future work will benefit from over fifty years of research into nearest neighbor algorithms, kernel methods, and neural networks.

In this paper we first define CKMs, explore their formal and computational properties, and compare them to existing kernel methods. We then propose a key contribution of this work: a sum-product function (SPF) that efficiently sums over an exponential number of virtual instances. We then describe how to train the CKM with and without parameter optimization. Finally, we present results on NORB and variants that show a CKM trained on a CPU can be competitive with convnets trained for much longer on a GPU and can outperform them on tests of composition and symmetry, as well as markedly improving over previous IBL methods."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Convolutional neural networks (convnets) have achieved impressive results on recent computer vision benchmarks. While they benefit from multiple layers that encode nonlinear decision boundaries and a degree of translation invariance, training convnets is a lengthy procedure fraught with local optima. Alternatively, a kernel method that incorporates the compositionality and symmetry of convnets could learn similar nonlinear concepts yet with easier training and architecture selection. We propose compositional kernel machines (CKMs), which effectively create an exponential number of virtual training instances by composing transformed sub-regions of the original ones. Despite this, CKM discriminant functions can be computed efficiently using ideas from sum-product networks. The ability to compose virtual instances in this way gives CKMs invariance to translations and other symmetries, and combats the curse of dimensionality. Just as support vector machines (SVMs) provided a compelling alternative to multilayer perceptrons when they were introduced, CKMs could become an attractive approach for object recognition and other vision problems. In this paper we define CKMs, explore their properties, and present promising results on NORB datasets. Experiments show that CKMs can outperform SVMs and be competitive with convnets in a number of dimensions, by learning symmetries and compositional concepts from fewer samples without data augmentation."}

The key issue in using an instance-based learner on large images is the curse of dimensionality. Even millions of training images are not enough to construct a meaningful neighborhood for a 256 × 256 pixel image. The compositional kernel machine (CKM) addresses this issue by constructing an exponential number of virtual instances. The core hypothesis is that a variation of the visual world can be understood as a rearrangement of low-dimensional pieces that have been seen before. For example, an image of a house could be recognized by matching many pieces from other images of houses from different viewpoints. The virtual instances represent this set of all possible transformations and recombinations of the training images. The arrangement of these pieces cannot be arbitrary, so CKMs learn how to compose virtual instances with weights on compositions. A major contribution of this work is the ability to efficiently sum over this set with a sum-product function.

The set of virtual instances is related to the nonlinear image manifolds described by Simard et al. (1992), but with key differences. Whereas the tangent distance accounts for transformations applied to the whole image, virtual instances can depict local transformations that are applied differently across an image. Secondly, the tangent plane approximation of the image manifold is only accurate near the training images. Virtual instances can easily represent distant transformations. Unlike the explicit augmentation of virtual support vectors in Scholkopf et al. (1996), the set of virtual instances in a CKM is implicit and exponentially larger. Platt & Allen (1996) demonstrated an early version of virtual instances to expand the set of negative examples for a linear classifier.

"}, {"section_name": "2.1 DEFINITION", "section_text": "We define CKMs using notation common to other IBL techniques. The two prototypical instance-based learners are k-nearest neighbors and support vector machines. The foundation for both algorithms is a similarity or kernel function K(x, x') between two instances. Given a training set of m labeled instances of the form (x_i, y_i) and a query x_q, the k-NN algorithm outputs the most common label of the k nearest instances:

y_kNN(x_q) = argmax_c Σ_{i=1}^m 1[c = y_i ∧ K(x_i, x_q) ≥ K(x_k, x_q)]

where 1[·] equals one if its argument is true and zero otherwise, and x_k is the kth nearest training instance to query x_q, assuming unique distances. The multiclass support vector machine (Crammer & Singer, 2001) in its dual form can be seen as a weighted nearest neighbor that outputs the class with the highest weighted sum of kernel values with the query:

y_SVM(x_q) = argmax_c Σ_{i=1}^m α_{i,c} K(x_i, x_q)

where α_{i,c} is the weight on training instance x_i that contributes to the score of class c.
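For concreteness, both discriminants can be written in a few lines; a minimal sketch with an RBF kernel (the kernel choice and names are ours):

import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def y_svm(x_q, X, alpha):
    # alpha: (m, n_classes) dual weights; argmax_c of sum_i alpha[i,c] * K(x_i, x_q).
    k = np.array([rbf(x, x_q) for x in X])  # (m,)
    return int(np.argmax(alpha.T @ k))

def y_knn(x_q, X, y, k=5):
    # Most common label among the k most similar training instances.
    sim = np.array([rbf(x, x_q) for x in X])
    nearest = y[np.argsort(-sim)[:k]]
    return int(np.bincount(nearest).argmax())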
The CKM performs the same classification as these instance-based methods, but it sums over an exponentially larger set of virtual instances to mitigate the curse of dimensionality. Virtual instances are composed of rearranged elements from one or more training instances. Depending on the design of the CKM, elements can be subsets of instance variables (e.g., overlapping pixel patches) or features thereof (e.g., ORB features or a 2D grid of convnet feature vectors). We assume there is a deterministic procedure that processes each training or test instance x_i into a fixed tuple of indexed elements E_{x_i} = (e_{i,1}, ..., e_{i,|E_{x_i}|}), where instances may have different numbers of elements. The query instance x_q (with tuple of elements E_{x_q}) is the example that is being classified by the CKM; it is a training instance during training and a test instance during testing. A virtual instance z is represented by a tuple of elements from training instances, e.g. E_z = (e_{10,5}, e_{71,2}, ..., e_{46,17}). Given a query instance x_q, the CKM represents a set of virtual instances, each with the same number of elements as E_{x_q}. We define a leaf kernel K_L(e_{i,j}, e_{i',j'}) that measures the similarity between any two elements. Using kernel composition (Aronszajn, 1950), we define the kernel between the query instance x_q and a virtual instance z as the product of leaf kernels over their corresponding elements.

We combine leaf kernels with weighted sums and products to compactly represent a sum over kernels with an exponential number of virtual instances. Just as a sum-product network can compactly represent a mixture model that is a weighted sum over an exponential number of mixture components, the same algebraic decomposition can compactly encode a weighted sum over an exponential number of kernels. For example, if the query instance is represented by two elements E_{x_q} = (e_{q,1}, e_{q,2}) and the training set contains elements {e_1, e_2, e_3, e_4, e_5, e_6}, then

[w_1 K_L(e_{q,1}, e_1) + w_2 K_L(e_{q,1}, e_2) + w_3 K_L(e_{q,1}, e_3)] × [w_4 K_L(e_{q,2}, e_4) + w_5 K_L(e_{q,2}, e_5) + w_6 K_L(e_{q,2}, e_6)]   (1)

expresses a weighted sum over nine virtual instances using eleven additions/multiplications instead of twenty-six for an expanded flat sum w_1 K_L(e_{q,1}, e_1) K_L(e_{q,2}, e_4) + ... + w_9 K_L(e_{q,1}, e_3) K_L(e_{q,2}, e_6). If the query instance and training set contained 100 and 10,000 elements, respectively, then a similar factorization would use O(10^6) operations compared to a naive sum over 10^500 virtual instances. Leveraging the Sum-Product Theorem (Friesen & Domingos, 2016), we define CKMs to allow for more expressive architectures with this exponential computational savings.

A compositional kernel machine (CKM) is defined recursively:
1. A leaf kernel over a query element and a training set element is a CKM.
2. A product of CKMs with disjoint scopes is a CKM.
3. A weighted sum of CKMs with the same scope is a CKM.

The scope of an operator is the set of query elements it takes as inputs; it is analogous to the receptive field of a unit in a neural network, but with CKMs the query elements are not restricted to being pixels on the image grid (e.g., they may be defined as a set of extracted image features). A leaf kernel has singleton scope, internal nodes have scope over some subset of the query elements, and the root node of the CKM has full scope of all query elements E_{x_q}. This definition allows for rich CKM architectures with many layers to represent elaborate compositions.

The value of each sum node child is multiplied by a weight w_{k,c} and optionally a constant cost function φ(e_{i,j}, e_{i',j'}) that rewards certain compositions of elements. Analogous to a multiclass SVM, the CKM has a separate set of weights for each class c in the dataset. The CKM classifies a query instance as y_CKM(x_q) = argmax_c S_c(x_q), where S_c(x_q) is the value of the root node of the CKM evaluating query instance x_q using weights for class c.

Corollary 1. S_c(x_q) can sum over the set of virtual instances in time linear in the size of the SPF.

Proof. For each query instance element e_{q,j} we define a discrete variable Z_j with a state for each training element e_{i',j'} found in a leaf kernel K_L(e_{q,j}, e_{i',j'}) in the CKM. The Cartesian product of the domains of the variables Z defines the set of virtual instances represented by the CKM. S_c(x_q) is an SPF over a semiring (R, ⊕, ⊗, 0, 1), variables Z, constant functions w and φ, and univariate functions K_L(e_{q,j}, Z_j). With the appropriate definition of leaf kernels, any semiring can be used. The definition above provides that the children of every product node have disjoint scopes. Constant functions have empty scope, so there is no intersection with the scopes of other children. With all product nodes decomposable, S_c(x_q) is a decomposable SPF and can therefore sum over all states of Z, the virtual instances, in time linear in the size of the CKM.
error (Luntz & Brailovsky|1969), it is typically too expensive to compute or optimize with non-IBI methods (Chapelle et al.2002). With CKMs, caching the SPFs and efficient data structures mak it feasible to compute exact partial derivatives of the leave-one-out loss over the whole training set We use a multiclass squared-hinge loss\nL(xi,Yi) = max 1 + Syi(xi), Best incorrect class True class\nfor the loss on training instance x; with true label yi and highest-scoring incorrect class y'. We use the squared version of the hinge loss as it performs better empirically and prioritizes updates to element weights that led to larger margin violations. In general, this objective is not convex as it involves the difference of the two discriminant functions which are strictly convex (due to the choice of semiring and the product of weights on each virtual instance). In the special case of the sum-product semiring and unique weights on virtual instances the objective is convex as is true for L2-SVMs. Convnets also have a non-convex objective, but they require lengthy optimization tc perform well. As we show in Section 3] CKMs can achieve high accuracy with uniform weights which further serves as good initialization for gradient descent.\nFor each epoch, we iterate through the training set, for each training instance x; optimizing the block of weights on those branches with Ex, as descendants. We take gradient steps to lower the leave- an early stopping condition. A component of the gradient of the squared-hinge loss on an instance takes the form\nwhere (xi, yi) = 1 + Sy'(x) Sy, (x). We compute partial derivatives @Sc(xi)with backprop OWk, c agation through the SPF. For efficiency, terms of the gradient can be set to zero and excluded from backpropagation if the values of corresponding leaf kernels are small enough. This is either exact (e.g., if is maximization) or an approximation (e.g., if is normal addition).\nCKMs have several scalability advantages over convnets. As mentioned previously, they do no require a lengthy training procedure. This makes it much easier to add new instances and categories Whereas most of the computation to evaluate a single setting of convnet hyperparameters is sunk ir. training, CKMs can efficiently race hyperparameters on hold-out data (Lee & Moorel1994)..\nThe evaluation of the CKM depends on the structure of the SPF, the size of the training set, and the computer architecture. A basic building block of these SPFs is a sum node with a number of children on the order of magnitude of the training set elements [E|. On a sufficiently parallel\nThe training procedure for a CKM builds an SPF that encodes the virtual instances. There are then. two options for how to set weights in the model. As with k-NN, the weights in the CKM could be set to uniform. Alternatively, as with SVMs, the weights could be optimized to improve generalization and reduce model size\n9Sy'(xi) 2^ if (xi,Yi) >O^c= y dwk,c a L(xi,Yi OSy;(xi) -2/ if (xi,Yi) >O^c= Yi dwk,c Owk,c 0 otherwise\nTable 1: Dataset properties Name #Training Exs. - #Testing Exs. Dimensions Classes Small NORB 24300-24300 96 x 96 5 NORB Compositions 100-1000 256 x 256 2 NORB Symmetries {50, 100, ..., 12800}-2916 108 x 108 6\nSmall NORB 24300-24300 96 x 96 5 NORB Compositions 100-1000 256 x 256 2 NORB Symmetries {50, 100, ..., 12800}-2916 108 x 108 6\ncomputer, assuming the size of the training set elements greatly exceeds the dimensionality of the. 
On a sufficiently parallel computer, assuming the size of the training set greatly exceeds the dimensionality of the leaf kernel, this sum node will require O(log |E|) time (the depth of a parallel reduction circuit) and O(|E|) space. Duda et al. (2000) describe a constant time nearest neighbor circuit that relies on precomputed Voronoi partitions, but this has impractical space requirements in high dimensions. As with SVMs, optimization of sparse element weights can greatly reduce model size.

On a modest multicore computer, we must resort to using specialized data structures. Hash codes can be used to index raw features or to measure Hamming distance as a proxy for more expensive distance functions. While they are perhaps the fastest way to accelerate a nearest neighbor search, the most accurate hashing methods involve a training period yet do not necessarily result in high recall (Torralba et al., 2008; Heo et al., 2012). There are many space-partitioning tree data structures in the literature, but in practice none offer exact search of nearest neighbors in high dimensions in logarithmic time. In our experiments we use hierarchical k-means trees (Muja & Lowe, 2009), which are a good compromise between speed and accuracy.

We test CKMs on three image classification scenarios that feature images from either the small NORB dataset or the NORB jittered-cluttered dataset (LeCun et al., 2004). Both NORB datasets contain greyscale images of five categories of plastic toys photographed with varied altitudes, azimuths, and lighting conditions. Table 1 summarizes the datasets. We first describe the SPN architecture and then detail each of the three scenarios.

Table 1: Dataset properties
Name | #Training Exs. - #Testing Exs. | Dimensions | Classes
Small NORB | 24300 - 24300 | 96 x 96 | 5
NORB Compositions | 100 - 1000 | 256 x 256 | 2
NORB Symmetries | {50, 100, ..., 12800} - 2916 | 108 x 108 | 6

In our experiments the architecture of the SPF S_c(x_q) for each query image is based on its unique set of extracted ORB features. Like SIFT features, ORB features are rotation-invariant and produce a descriptor from intensity differences, but ORB is much faster to compute and thus suitable for real-time applications (Rublee et al., 2011).
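As a concrete illustration of the element extraction just described, the following sketch builds the element set for one image with OpenCV's ORB detector; the file name and feature count are placeholders, and the dictionary layout is our own choice, not the authors' code:

```python
import cv2
import numpy as np

# f(e) is the 256-bit binary descriptor, p(e) the keypoint image position.
img = cv2.imread("toy.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image file
orb = cv2.ORB_create(nfeatures=100)
keypoints, descriptors = orb.detectAndCompute(img, None)

# Element set E_x for this image: one (feature, position) pair per keypoint.
elements = [{"f": np.unpackbits(d), "p": np.array(kp.pt)}
            for kp, d in zip(keypoints, descriptors)]
```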
The elements E_{x_i} = (e_{i,1}, ..., e_{i,|E_{x_i}|}) of each image x_i are its extracted keypoints, where an element's feature vector and image position are denoted by f(e_{i,j}) and p(e_{i,j}) respectively. We use the max-sum semiring (⊕ = max, ⊗ = +) because it is more robust to noisy virtual instances, yields sparser gradients, is more efficient to compute, and performs better empirically compared with the sum-product semiring.

The SPF S_c(x_q) maximizes over variables Z = (Z_1, ..., Z_{|E_{x_q}|}) corresponding to query elements E_{x_q}, with states for all possible virtual instances. The SPF contains a unary scope max node for every variable {Z_j} that maximizes over the weighted kernels of all possible training elements: Φ(Z_j) = ⊕_{z_j ∈ E} w_{z_j,c} ⊗ K_L(z_j, e_{q,j}). The SPF contains a binary scope max node for all pairs of variables {Z_j, Z_{j'}} for which at least one corresponding query element is within the k-nearest spatial neighbors of the other. These nodes maximize over the weighted kernels of all possible combinations of training set elements:

Φ(Z_j, Z_{j'}) = ⊕_{z_j ∈ E} ⊕_{z_{j'} ∈ E} w_{z_j,c} ⊗ w_{z_{j'},c} ⊗ φ(z_j, z_{j'}) ⊗ K_L(z_j, e_{q,j}) ⊗ K_L(z_{j'}, e_{q,j'})

This maximizes over all possible pairs of training set elements, weighting the two leaf kernels by two corresponding element weights and a cost function. We use a leaf kernel for image elements that incorporates both the Hamming distance between their features and the Euclidean distance between their image positions: K_L(e_{i,j}, e_{i',j'}) = max(β_0 - β_1 d_Ham(f(e_{i,j}), f(e_{i',j'})), 0) + max(-β_2 ||p(e_{i,j}) - p(e_{i',j'})||, β_3). This rewards training set elements that look like a query instance element and appear in a similar location, with thresholds for efficiency (a runnable sketch appears at the end of this section). This can represent, for example, the photographic bias to center foreground objects, or a discriminative cue from seeing sky at the top of the image. We use the pairwise cost function φ(e_{i,j}, e_{i',j'}) = 1[i = i'] β_4 that rewards combinations of elements from the same source training image. This captures the intuition that compositions sourced from more images are less coherent and more likely to contain nonsense than those using fewer. The image is represented as a sum of these unary and binary max nodes. The scopes of the children of the sum are restricted to be disjoint, so the children {(Z_1, Z_2), (Z_2, Z_3)} would be disallowed, for example. This restriction is what allows the SPF to be tractable, and with multiple sums the SPF has high treewidth. By comparison, a Markov random field expressing these dependencies would be intractable. The root max node of the SPF has P sums as children, each of which has its own random set of unary and binary scope max node children that cover the full scope Z. We illustrate a simplified version of the SPF architecture in Figure 1. Though this SPF models limited image structure, the definition of CKMs allows for more expressive architectures as with SPNs.

Figure 1: Simplified illustration of the SPF S_c(x_q) architecture with max-sum semiring used in experiments (using ORB features as elements, |E_{x_q}| ~ 100). Red dots depict elements E_{x_q} of query instance x_q. Blue dots show training set elements e_{i,j} ∈ E, duplicated with each query element for clarity. A boxed K_L shows the leaf kernel with lines descending to its two element arguments. The max nodes are labeled with their scopes. Weights and cost functions (arguments omitted) appear next to the nodes they multiply. Only a subset of the unary and binary scope nodes are drawn. Only two of the P top-level nodes are fully detailed (the children of the second are drawn faded).

In the following sections, we refer to two variants, CKM and CKM_W. The CKM version uses uniform weights w_{k,c}, similar to the basic k-nearest neighbor algorithm. The CKM_W method optimizes weights w_{k,c} as described in Section 2.2. Both versions restrict the weights for class c to be 0̄ (the ⊕ identity) for those training elements not in class c. This constraint ensures that method CKM is discriminative (as is true with k-NN) and reduces the number of parameters optimized by CKM_W. The hyperparameters of ORB feature extraction, leaf kernels, cost function, and optimization were chosen using grid search on a validation set.

With our CPU implementation, CKM trains in a single pass of feature extraction and storage at ~5 ms/image, CKM_W trains in under ten epochs at ~90 ms/image, and both versions test at ~80 ms/image. The GPU-optimized convnets train at ~2 ms/image for many epochs and test at ~1 ms/image. Remarkably, CKM on a CPU trains faster than the convnet on a GPU.

We use the original train-test separation, which measures generalization to new instances of a category (i.e., testing on a toy truck that is different from the toys trained on). We show promising results in Table 2, comparing CKMs to deep and IBL methods.
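The leaf kernel and pairwise cost defined above can be sketched as follows; the β thresholds are illustrative placeholders rather than the values found by the paper's grid search:

```python
import numpy as np

B0, B1, B2, B3, B4 = 32.0, 1.0, 0.1, -8.0, 2.0   # hypothetical beta values

def leaf_kernel(f_a, f_b, p_a, p_b):
    # Appearance term: thresholded Hamming similarity of binary descriptors.
    ham = np.count_nonzero(f_a != f_b)
    appearance = max(B0 - B1 * ham, 0.0)
    # Position term: penalize spatial displacement, floored at beta_3.
    position = max(-B2 * np.linalg.norm(p_a - p_b), B3)
    return appearance + position

def pair_cost(img_a, img_b):
    # phi rewards composing elements drawn from the same training image.
    return B4 if img_a == img_b else 0.0

# Toy usage on two random 256-bit descriptors and nearby image positions.
f_a = np.random.randint(0, 2, 256)
f_b = np.random.randint(0, 2, 256)
print(leaf_kernel(f_a, f_b, np.array([10.0, 20.0]), np.array([12.0, 18.0])))
```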
With improvement over k-NN and SVM, the CKM and CKM_W results show the benefit of using virtual instances to combat the curse of dimensionality. We note that the CKM variant that does not optimize weights performs nearly as well as the CKM_W version that does. Since the test set uses a different set of toys, the use of untrained ORB features hurts the performance of the CKM. Convnets have an advantage here because they discriminatively train their lowest level of features and represent richer image structure in their architecture. To become competitive, future work should improve upon this preliminary CKM architecture. We demonstrate the advantage of CKMs for representing composition and symmetry in the following experiments.

"}, {"section_index": "6", "section_name": "3.3 NORB COMPOSITIONS", "section_text": "A general goal of representation learning is to disentangle the factors of variation of a signal without having to see those factors in all combinations. To evaluate progress towards this, we created images containing three toys each, sourced from the small NORB training set. Small NORB contains ten types of each toy category (e.g., ten different airplanes), which we divided into two collections. Each image is generated by choosing one of the collections uniformly and, for each of three categories (person, airplane, animal), randomly sampling a toy from that collection with higher probability (P = ) than from the other collection (i.e., there are two children with disjoint toy collections but they sometimes borrow). The task is to determine which of the two collections generated the image. This dataset measures whether a method can distinguish different compositions without having seen all possible permutations of those objects through symmetries and noisy intra-class variation. Analogous tasks include identifying people by their clothing, recognizing social groups by their members, and classifying cuisines by their ingredients.

We compare CKMs to other methods in Table 3. Convnets and their features are computed using the TensorFlow library (Abadi et al., 2015). Training convnets from few images is very difficult without resorting to other datasets; we augment the training set with random crops, which still yields test accuracy near chance. In such situations it is common to train an SVM with features extracted by a convnet trained on a different, larger dataset. We use 2048-dimensional features extracted from the penultimate layer of the pre-trained Inception network (Szegedy et al., 2015) and a linear kernel SVM with squared-hinge loss (Pedregosa et al., 2011). Notably, the CKM is much more accurate than the deep methods, and it is about as fast as the SVM despite not taking advantage of the GPU.

Figure 2: Images from NORB Compositions

Composition is a useful tool for modeling the symmetries of objects. When we see an image of an object in a new pose, parts of the image may look similar to parts of images of the object in poses we have seen before.
In this experiment, we partition the training set of NORB jittered-cluttered into a new dataset with 10% withheld for each of validation and testing. Training and testing on the same group of toy instances, this measures the ability to generalize to new angles, lighting conditions, backgrounds, and distortions.

We vary the amount of training data to plot learning curves in Figure 3. We observe that CKMs are better able to generalize to these distortions than other methods, especially with less data. Importantly, the performance of CKM improves with more data, without requiring costly optimization as data is added. We note that the benefit of CKM_W using weight learning becomes apparent with 200 training instances. This learning curve suggests that CKMs would be well suited for applications in cluttered environments with many 3D transformations (e.g., loop closure).

Figure 3: Number of training instances versus accuracy on unseen symmetries in NORB (methods compared: CKM_W, CKM, SVM with convnet features, convnet, and k-NN).

"}, {"section_index": "8", "section_name": "4 CONCLUSION", "section_text": "This paper proposed compositional kernel machines, an instance-based method for object recognition that addresses some of the weaknesses of deep architectures and other kernel methods. We showed how using a sum-product function to represent a discriminant function leads to tractable summation over the weighted kernels of an exponential set of virtual instances, which can mitigate the curse of dimensionality and improve sample complexity. We proposed a method to discriminatively learn weights on individual instance elements and showed that this improves upon uniform weighting. Finally, we presented results in several scenarios showing that CKMs are a significant improvement for IBL and show promise compared with deep methods.

Future research directions include developing other architectures and learning procedures for CKMs, integrating symmetry transformations into the architecture through kernels and cost functions, and applying CKMs to structured prediction, regression, and reinforcement learning problems. CKMs exhibit a reversed trade-off of fast learning speed and large model size compared to neural networks. Given that animals can benefit from both trade-offs, these results may inspire computational theories of different brain structures, especially the neocortex versus the cerebellum (Ito, 2012).

"}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors are grateful to John Platt for helpful discussions and feedback. This research was partly supported by ONR grant N00014-16-1-2697, AFRL contract FA8750-13-2-0019, a Google PhD Fellowship, an AWS in Education Grant, and an NVIDIA academic hardware grant. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ONR, AFRL, or the United States Government.

"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.
URL http://tensorflow.org. Software available from tensorflow.org.

Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337-404, 1950.

Yoshua Bengio and Yann LeCun. Scaling learning algorithms towards AI. Large-Scale Kernel Machines, 34(5), 2007.

Richard O Duda, Peter E Hart, and David G Stork. Pattern Classification. John Wiley & Sons, 2000.

King Sun Fu. Syntactic Methods in Pattern Recognition, volume 112. Elsevier, 1974.

Masao Ito. The Cerebellum: Brain for an Implicit Self. FT Press, 2012.

Yann LeCun, Fu Jie Huang, and Leon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on, volume 2, pp. 97-104. IEEE, 2004.

Fabian Pedregosa, Gael Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

Abram L Friesen and Pedro Domingos. The sum-product theorem: A foundation for learning tractable models. In Proceedings of the 33rd International Conference on Machine Learning, 2016.

John C Platt and Timothy P Allen. A neural network classifier for the I1000 OCR chip. In Advances in Neural Information Processing Systems 9, pp. 938-944, 1996.

Bernhard Scholkopf, Chris Burges, and Vladimir Vapnik. Incorporating invariances in support vector learning machines. In Artificial Neural Networks (ICANN), pp. 47-52. Springer, 1996.

Patrice Simard, Yann LeCun, and John S Denker. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems 5, 1992.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015.

Antonio Torralba, Rob Fergus, and Yair Weiss. Small codes and large image databases for recognition. In Computer Vision and Pattern Recognition (CVPR), IEEE Conference on, pp. 2269-2276. IEEE, 2008."}]
BJ0Ee8cxx
[{"section_index": "0", "section_name": "HIERARCHICAL MEMORY NETWORKS", "section_text": "Sarath Chandar*1, Sungjin Ahn1, Hugo Larochelle2,4, Pascal Vincent1,4, Gerald Tesauro3, Yoshua Bengio1,4

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which require the network to read from extremely large memories. On the other hand, it is well known that hard attention mechanisms based on reinforcement learning are challenging to train successfully. In this paper, we explore a form of hierarchical memory network, which can be considered as a hybrid between hard and soft attention memory networks. The memory is organized in a hierarchical structure such that reading from it is done with less computation than soft attention over a flat memory, while also being easier to train than hard attention over a flat memory. Specifically, we propose to incorporate Maximum Inner Product Search (MIPS) in the training and inference procedures for our hierarchical memory network. We explore the use of various state-of-the-art approximate MIPS techniques and report results on SimpleQuestions, a challenging large scale factoid question answering task.

"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Until recently, traditional machine learning approaches for challenging tasks such as image captioning, object detection, or machine translation have consisted of complex pipelines of algorithms, each being separately tuned for better performance. With the recent success of neural networks and deep learning research, it has now become possible to train a single model end-to-end, using backpropagation. Such end-to-end systems often outperform traditional approaches, since the entire model is directly optimized with respect to the final task at hand. However, simple encode-decode style neural networks often underperform on knowledge-based reasoning tasks like question answering or dialog systems. Indeed, in such cases it is nearly impossible for regular neural networks to store all the necessary knowledge in their parameters.

Neural networks with memory (Graves et al., 2014; Weston et al., 2015b) can deal with knowledge bases by having an external memory component which can be used to explicitly store knowledge. The memory is accessed by reader and writer functions, which are both made differentiable so that the entire architecture (neural network, reader, writer and memory components) can be trained end-to-end using backpropagation. Memory-based architectures can also be considered as generalizations of RNNs and LSTMs, where the memory is analogous to recurrent hidden states. However, they are much richer in structure and can handle very long-term dependencies because once a vector (i.e., a memory) is stored, it is copied from time step to time step and can thus stay there for a very long time (and gradients correspondingly flow back in time unhampered).

*Corresponding author: apsarathchandar@gmail.com

Memory: The memory is an array of cells, each capable of storing a vector. The memory is often initialized with external data (e.g., a database of facts), by filling its cells with pre-trained vector representations of that data.
Input module: The input module computes a representation of the input that can be used by other modules.

Writer: The writer takes the input representation and updates the memory based on it. The writer can be as simple as filling the slots in the memory with input vectors in a sequential way (as often done in memory networks). If the memory is bounded, instead of writing sequentially, the writer has to decide where to write and when to rewrite cells (as often done in NTMs).

Reader: Given an input and the current state of the memory, the reader retrieves content from the memory, which will then be used by an output module. This often requires comparing the input's representation, or a function of the recurrent state, with memory cells using some scoring function such as a dot product.

Output module: Given the content retrieved by the reader, the output module generates a prediction, which often takes the form of a conditional distribution over multiple labels for the output.

For the rest of the paper, we will use the name memory network to describe any model which has any form of these five components. We would like to highlight that all the components except the memory are learnable. Depending on the application, any of these components can also be fixed. In this paper, we will focus on the situation where a network does not write and only reads from the memory.

In this paper, we focus on the application of memory networks to large-scale tasks. Specifically, we focus on large scale factoid question answering. For this problem, given a large set of facts and a natural language question, the goal of the system is to answer the question by retrieving the supporting fact for that question, from which the answer can be derived. Application of memory networks to this task has been studied by Bordes et al. (2015). However, Bordes et al. (2015) depended on keyword-based heuristics to filter the facts to a smaller set which is manageable for training. Heuristics are invariably dataset dependent, and we are interested in a more general solution which can be used when the facts are of any structure. One can design soft attention retrieval mechanisms, where a convex combination of all the cells is retrieved, or hard attention retrieval mechanisms, where one or a few cells from the memory are retrieved. Soft attention is achieved by using a softmax over the memory, which makes the reader differentiable and hence learning can be done using gradient descent. Hard attention is achieved by using methods like REINFORCE (Williams, 1992), which provides a noisy gradient estimate when discrete stochastic decisions are made by a model.

Both soft attention and hard attention have limitations. As the size of the memory grows, soft attention using softmax weighting is not scalable. It is computationally very expensive, since its complexity is linear in the size of the memory. Also, at initialization, gradients are dispersed so much that it can reduce the effectiveness of gradient descent. These problems can be alleviated by a hard attention mechanism, for which the training method of choice is REINFORCE. However, REINFORCE can be brittle due to its high variance, and existing variance reduction techniques are complex. Thus, it is rarely used in memory networks (even in cases of a small memory).

In this paper, we propose a new memory selection mechanism based on Maximum Inner Product Search (MIPS) which is both scalable and easy to train. This can be considered as a hybrid of soft and hard attention mechanisms. The key idea is to structure the memory in a hierarchical way such that it is easy to perform MIPS, hence the name Hierarchical Memory Network (HMN). HMNs are scalable at both training and inference time.
The main contributions of the paper are as follows:

- We explore hierarchical memory networks, where the memory is organized in a hierarchical fashion, which allows the reader to efficiently access only a subset of the memory. While there are several ways to decide which subset to access, we propose to pose memory access as a maximum inner product search (MIPS) problem.
- We empirically show that exact MIPS-based algorithms not only enjoy similar convergence as soft attention models, but can even improve the performance of the memory network. Since exact MIPS is as computationally expensive as a full soft attention model, we propose to train the memory networks using approximate MIPS techniques for scalable memory access.
- We empirically show that unlike exact MIPS, approximate MIPS algorithms provide a speedup and scalability of training, though at the cost of some performance.

In this section, we describe the proposed Hierarchical Memory Network (HMN). In this paper, HMNs differ from regular memory networks in only two of their components: the memory and the reader.

Memory: Instead of a flat array of cells, HMNs leverage a hierarchical memory structure. Memory cells are organized into groups, and the groups can further be organized into higher level groups. The choice of memory structure is tightly coupled with the choice of reader, which is essential for fast memory access. We consider three classes of approaches for the memory's structure: hashing-based approaches, tree-based approaches, and clustering-based approaches. This is explained in detail in the next section.

Reader: The reader in the HMN is different from the readers in flat memory networks. Flat memory-based readers use either soft attention over the entire memory or hard attention that retrieves a single cell. While these mechanisms might work with small memories, with HMNs we are more interested in achieving scalability towards very large memories. So instead, HMN readers use soft attention only over a selected subset of the memory. Selecting memory subsets is guided by a maximum inner product search algorithm, which can exploit the hierarchical structure of the organized memory to retrieve the most relevant facts in sub-linear time. The MIPS-based reader is explained in more detail in the next section.

In HMNs, the reader is thus trained to create MIPS queries such that it can retrieve a sufficient set of facts. While most of the standard applications of MIPS (Ram & Gray, 2012; Bachrach et al., 2014; Shrivastava & Li, 2014) so far have focused on settings where both the query vector and the database (memory) vectors are precomputed and fixed, memory readers in HMNs learn to do MIPS by updating the input representation such that the result of MIPS retrieval contains the correct fact(s).

In this section, we describe how the HMN memory reader uses Maximum Inner Product Search (MIPS) during learning and inference.

We begin with a formal definition of K-MIPS. Given a set of points X = {x_1, ..., x_n} and a query vector q, our goal is to find

argmax_{i in X}^{(K)} q^T x_i     (1)

where argmax^{(K)} returns the indices of the top-K maximum values. In the case of HMNs, X corresponds to the memory and q corresponds to the vector computed by the input module.

A simple but inefficient solution for K-MIPS involves a linear search over the cells in memory, by performing the dot product of q with all the memory cells. While this will return the exact result for K-MIPS, it is too costly to perform when we deal with a large-scale memory. However, in many practical applications it is often sufficient to have an approximate result for K-MIPS, trading speed-up at the cost of accuracy.
There exist several approximate K-MIPS solutions in the literature (Shrivastava & Li, 2014; 2015; Bachrach et al., 2014; Neyshabur & Srebro, 2015). All of them add a form of hierarchical structure to the memory and visit only a subset of the memory cells to find the maximum inner product for a given query. Hashing-based approaches (Shrivastava & Li, 2014; 2015; Neyshabur & Srebro, 2015) hash cells into multiple bins, and given a query they search for K-MIPS cell vectors only in bins that are close to the bin associated with the query. Tree-based approaches (Ram & Gray, 2012; Bachrach et al., 2014) create search trees with cells in the leaves of the tree. Given a query, a path in the tree is followed, and MIPS is performed only for the leaf on the chosen path. Clustering-based approaches (Auvolat et al., 2015) cluster cells into multiple clusters (or a hierarchy of clusters), and given a query they perform MIPS on the centroids of the top few clusters. We refer the reader to (Auvolat et al., 2015) for an extensive comparison of various state-of-the-art approaches for approximate K-MIPS.

Our proposal is to exploit this rich approximate K-MIPS literature to achieve scalable training and inference in HMNs. Instead of filtering the memory with heuristics, we propose to organize the memory based on approximate K-MIPS algorithms and then train the reader to learn to perform MIPS. Specifically, consider the following softmax over the memory, which the reader has to perform for every reading step to retrieve a set of relevant candidates:

R_out = softmax(h(q) M^T)     (2)

where h(q) in R^d is the representation of the query and M in R^{N x d} is the memory, with N being the total number of cells in the memory. We propose to approximate this softmax with a softmax^{(K)} over only the top-K candidate cells, defined as follows:

C = argmax^{(K)} h(q) M^T     (3)
R_out = softmax(h(q) M[C]^T)     (4)

where C is the set of indices of the top-K MIP candidate cells and M[C] is the sub-matrix of M whose rows are indexed by C.

One advantage of using the softmax^{(K)} is that it naturally focuses on cells that would normally receive the strongest gradients during learning. In a full softmax, the gradients are otherwise more dispersed across cells, given the large number of cells and despite many contributing only a small gradient. As our experiments will show, this results in slower training.

One problematic situation when learning with the softmax^{(K)} arises at the initial stages of training, when the K-MIPS reader does not include the correct fact candidate. To avoid this issue, we always include the correct candidate in the top-K candidates retrieved by the K-MIPS algorithm, effectively performing a fully supervised form of learning (see the sketch below).

During training, the reader is updated by backpropagation from the output module, through the subset of memory cells. Additionally, the log-likelihood of the correct fact computed using the K-softmax is also maximized. This second supervision helps the reader learn to modify the query such that the maximum inner product of the query with respect to the memory will yield the correct supporting fact in the top-K candidate set.

Until now, we described the exact K-MIPS-based learning framework, which still requires a linear look-up over all memory cells and would be prohibitive for large-scale memories. In such scenarios, we can replace the exact K-MIPS in the training procedure with approximate K-MIPS. This is achieved by deploying a suitable hierarchical memory structure.
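A minimal sketch of the softmax^{(K)} reader in Equations (3)-(4), assuming a NumPy memory matrix; the forced inclusion of the correct fact follows the fully supervised scheme described above:

```python
import numpy as np

def k_softmax_read(h_q, M, K, correct_idx=None):
    scores = M @ h_q                        # inner products with all N cells
    C = np.argpartition(-scores, K)[:K]     # exact top-K candidate indices
    if correct_idx is not None and correct_idx not in C:
        C[-1] = correct_idx                 # always include the correct fact
    s = scores[C]
    p = np.exp(s - s.max())
    return C, p / p.sum()                   # R_out over the K candidates

# Toy usage: 100000 memory cells of dimension 64.
M = np.random.randn(100000, 64).astype(np.float32)
h_q = np.random.randn(64).astype(np.float32)
C, r_out = k_softmax_read(h_q, M, K=10, correct_idx=42)
```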
The same approximate K-MIPS-based reader can be used during the inference stage as well. Of course, approximate K-MIPS algorithms might not return the exact MIPS candidates and will likely hurt performance, but at the benefit of achieving scalability.

While the memory representation is fixed in this paper, updating the memory along with the query representation should improve the likelihood of choosing the correct fact. However, updating the memory will reduce the precision of the approximate K-MIPS algorithms, since all of them assume that the vectors in the memory are static. Designing efficient dynamic K-MIPS should improve the performance of HMNs even further, a challenge that we hope to address in future work.

"}, {"section_index": "3", "section_name": "3.1 READER WITH CLUSTERING-BASED APPROXIMATE K-MIPS", "section_text": "Clustering-based approximate K-MIPS was proposed in (Auvolat et al., 2015), and it has been shown to outperform various other state-of-the-art data-dependent and data-independent approximate K-MIPS approaches for inference tasks. As we will show in the experiments section, clustering-based MIPS also performs better when used for training HMNs. Hence, we focus our presentation on the clustering-based approach and propose changes that were found to be helpful for learning HMNs.

The approach first converts MIPS to Maximum Cosine Similarity Search (MCSS):

argmax_{i in X}^{(K)} (q^T x_i) / (||q|| ||x_i||)     (5)

When all the data vectors x_i have the same norm, then MCSS is equivalent to MIPS. However, it is often restrictive to have this additional constraint. Instead, Auvolat et al. (2015) append additional dimensions to both query and data vectors to convert MIPS to MCSS. In HMN terminology, this corresponds to adding a few more dimensions to the memory cells and input representations.

The algorithm introduces two hyper-parameters, U < 1 and m in N*. The first step is to scale all the vectors in the memory by the same factor, such that max_i ||x_i||_2 = U. We then apply two mappings, P and Q, on the memory cells and on the input vector, respectively. These two mappings simply concatenate m new components to the vectors and make the norms of the data points all roughly the same (Shrivastava & Li, 2015). The mappings are defined as follows:

P(x) = [x, 1/2 - ||x||_2^2, 1/2 - ||x||_2^4, ..., 1/2 - ||x||_2^{2^m}]
Q(x) = [x, 0, 0, ..., 0]

We thus obtain the following approximation of MIPS by MCSS:

argmax_i q^T x_i  ~  argmax_i (Q(q)^T P(x_i)) / (||Q(q)||_2 ||P(x_i)||_2)

Once we convert MIPS to MCSS, we can use spherical K-means (Zhong, 2005) or its hierarchical version to approximate and speed up the cosine similarity search. Once the memory is clustered, every read operation requires only K dot products, where K is the number of cluster centroids.
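The vector augmentation described above can be sketched as follows; this is a toy NumPy illustration of the Shrivastava & Li (2015) mappings, with arbitrary U and m values:

```python
import numpy as np

def augment(M, q, U=0.83, m=3):
    scale = U / np.max(np.linalg.norm(M, axis=1))
    Ms = M * scale
    norms = np.linalg.norm(Ms, axis=1)
    # P appends 1/2 - ||x||^2, 1/2 - ||x||^4, ..., 1/2 - ||x||^(2^m).
    extras = np.stack([0.5 - norms ** (2 ** (j + 1)) for j in range(m)], axis=1)
    P = np.hstack([Ms, extras])
    Q = np.concatenate([q, np.zeros(m)])    # query is padded with zeros
    return P, Q

M = np.random.randn(1000, 16)
q = np.random.randn(16)
P, Q = augment(M, q)
# Cosine ranking on (P, Q) now approximates inner-product ranking on (M, q).
cand = np.argsort(-(P @ Q) / (np.linalg.norm(P, axis=1) * np.linalg.norm(Q)))[:10]
```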
Since this is an approximation, it is error-prone, and as we are using it during learning, it introduces some bias in the gradients, which can affect the overall performance of the HMN. To alleviate this bias, we propose three simple strategies (sketched below):

- Instead of using only the top-K candidates for a single read query, we also add the top-K candidates retrieved for every other read query in the mini-batch. This serves two purposes. First, we can do efficient matrix multiplications by leveraging GPUs, since all the K-softmaxes in a mini-batch are over the same set of elements. Second, this also helps to decrease the bias introduced by the approximation error.
- For every read access, instead of only using the top few clusters which have maximum product with the read query, we also sample some clusters from the rest, based on a probability distribution log-proportional to the dot product with the cluster centroids. This also decreases the bias.
- We can also sample random blocks of memory and add them to the top-K candidates.

We empirically investigate the effect of these variations in Section 5.5.
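A sketch of the second strategy, combining the top clusters with clusters sampled log-proportionally to their centroid dot products (all sizes are illustrative):

```python
import numpy as np

def pick_clusters(hq, centroids, n_top=5, n_sample=3,
                  rng=np.random.default_rng(0)):
    dots = centroids @ hq
    top = np.argsort(-dots)[:n_top]                 # Top-K clusters
    rest = np.setdiff1d(np.arange(len(centroids)), top)
    logits = dots[rest] - dots[rest].max()
    probs = np.exp(logits)                          # p proportional to exp(dot)
    probs /= probs.sum()
    sampled = rng.choice(rest, size=n_sample, replace=False, p=probs)
    return np.concatenate([top, sampled])

centroids = np.random.randn(100, 64)
hq = np.random.randn(64)
cands = pick_clusters(hq, centroids)
```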
approach as in our work, instead of tree/hash-based approaches for memory search used by SAM.."}, {"section_index": "5", "section_name": "5 EXPERIMENTS", "section_text": "In this section, we report experiments on factoid question answering using hierarchical memory networks. Specifically, we use the SimpleQuestions dataset Bordes et al.(2015). The aim of these experiments is not to achieve state-of-the-art results on this dataset. Rather, we aim to propose and analyze various approaches to make memory networks more scalable and explore the achieved tradeoffs between speed and accuracy."}, {"section_index": "6", "section_name": "5.1 DATASET", "section_text": "We use SimpleQuestions (Bordes et al.] 2015) which is a large scale factoid question answering dataset. SimpleQuestions consists of 108,442 natural language questions, each paired with a cor- responding fact from Freebase. Each fact is a triple (subject,relation,object) and the answer to the question is always the object. The dataset is divided into training (75910), validation (10845), and test (21687) sets. Unlike Bordes et al.(2015) who additionally considered FB2M (10M facts) o1 FB5M (12M facts) with keyword-based heuristics for filtering most of the facts for each question we only use SimpleQuestions, with no keyword-based heuristics. This allows us to do a direct com- parison with the full softmax approach in a reasonable amount of time. Moreover, we would like to highlight that for this dataset, keyword-based filtering is a very efficient heuristic since all question. have an appropriate source entity with a matching word. Nevertheless, our goal is to design a general purpose architecture without such strong assumptions on the nature of the data.\nThe softmax arises in various situations and most relevant to this work are scaling methods for large vocabulary neural language modeling. In neural language modeling, the final layer is a softmax distribution over the next word and there exist several approaches to achieve scalability. (Morin & Bengio2005) proposes a hierarchical softmax based on prior clustering of the words into a binary, or more generally n-ary tree, that serves as a fixed structure for the learning process of the model. The complexity of training is reduced from O(n) to O(log n). Due to its clustering and tree structure, it resembles the clustering-based MIPS techniques we explore in this paper. However, the approaches differ at a fundamental level. Hierarchical softmax defines the probability of a leaf node as the product of all the probabilities computed by all the intermediate softmaxes on the way to tha leaf node. By contrast, an approximate MIPS search imposes no such constraining structure on the probabilistic model, and is better thought as efficiently searching for top winners of what amounts to be a large ordinary flat softmax. Other methods such as Noice Constrastive Estimation (Mnih & Gregor2014) and Negative Sampling (Mikolov et al.]2013) avoid an expensive normalization constant by sampling negative samples from some marginal distribution. By contrast, our approach approximates the softmax by explicitly including in its negative samples candidates that likely would have a large softmax value.Jean et al.(2015) introduces an importance sampling approach that considers all the words in a mini-batch as the candidate set. This in general might also not include the MIPS candidates with highest softmax values.\nLet Vg be the vocabulary of all words in the natural language questions. 
Let W_q be a |V_q| x m matrix where each row is an m-dimensional embedding for a word in the question vocabulary. This matrix is initialized with random values and learned during training. Given any question, we represent it with a bag-of-words representation by summing the vector representations of its words. Let q = {w_i}_{i=1}^{p}; then

h(q) = Σ_{i=1}^{p} W_q[w_i]

Then, to find the relevant fact from the memory M, we call the K-MIPS-based reader module with h(q) as the query. This uses Equations 3 and 4 to compute the output of the reader, R_out. The reader is trained by minimizing the Negative Log Likelihood (NLL) of the correct fact:

J = Σ_{i=1}^{N} -log(R_out[f_i])

where f_i is the index of the correct fact in memory M. We fix the memory embeddings to the TransE (Bordes et al., 2013) embeddings and learn only the question embeddings.

This model is simpler than the one reported in (Bordes et al., 2015), so that it is easy to analyze the effect of the various memory reading strategies.

"}, {"section_index": "7", "section_name": "5.3 TRAINING DETAILS", "section_text": "We trained the model with the Adam optimizer (Kingma & Ba, 2014), with a fixed learning rate of 0.001. We used mini-batches of size 128. We used 200-dimensional embeddings for the TransE entities, yielding 600-dimensional embeddings for facts by concatenating the embeddings of the subject, relation and object. We also experimented with summing the entities in the triple instead of concatenating, but we found that it was difficult for the model to differentiate facts this way. The only parameters learned by the HMN model are the question word embeddings. The entity distribution in SimpleQuestions is extremely sparse and hence, following Bordes et al. (2015), we also add artificial questions for all the facts for which we do not have natural language questions. Unlike Bordes et al. (2015), we do not add any other additional tasks like paraphrase detection to the model, mainly to study the effect of the reader. We stopped training for all the models when the validation accuracy consistently decreased for 3 epochs.

"}, {"section_index": "8", "section_name": "5.4 EXACT K-MIPS IMPROVES ACCURACY", "section_text": "In this section, we compare the performance of the full soft attention reader and exact K-MIPS attention readers. Our goal is to verify that K-MIPS attention is in fact a valid and useful attention mechanism, and to see how it fares when compared to full soft attention. For K-MIPS attention, we tried K in {10, 50, 100, 1000}. We would like to emphasize that, at training time, along with the K candidates for a particular question, we also add the K candidates for each question in the mini-batch, so the exact size of the softmax layer would be higher than K during training. In Table 1 we report the test performance of memory networks using the soft attention reader and the K-MIPS attention readers. We also report the average softmax size during training. From the table, it is clear that the K-MIPS attention readers improve the performance of the network compared to the soft attention reader. In fact, the smaller the value of K, the better the performance. This result suggests that it is better to use a K-MIPS layer instead of a softmax layer whenever possible. It is interesting to see that the convergence of the model is not slowed down due to this change in softmax computation (as shown in Figure 1).

This experiment confirms the usefulness of K-MIPS attention. However, exact K-MIPS has the same complexity as a full softmax. Hence, to scale up training, we need more efficient forms of K-MIPS attention, which is the focus of the next experiment.
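Putting the pieces together, a toy sketch of the question encoder h(q) and the NLL objective; dimensions are illustrative, and the random fact matrix stands in for the fixed TransE embeddings:

```python
import numpy as np

V_q, d = 5000, 600
W_q = 0.01 * np.random.randn(V_q, d)                 # learned word embeddings
M = np.random.randn(100000, d).astype(np.float32)    # fixed fact memory

def h(question_word_ids):
    return W_q[question_word_ids].sum(axis=0)        # bag-of-words sum

def reader_nll(q_ids, f_i, K=10):
    hq = h(q_ids)
    scores = M @ hq
    C = np.argpartition(-scores, K)[:K]
    if f_i not in C:
        C[-1] = f_i                                  # supervise with the fact
    s = scores[C]
    p = np.exp(s - s.max())
    p /= p.sum()                                     # softmax^(K), Eq. (4)
    return -np.log(p[list(C).index(f_i)])

loss = reader_nll(np.array([12, 845, 903]), f_i=42)  # hypothetical question
```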
Model | Test Acc. | Avg. Softmax Size
Full-softmax | 59.5 | 108442
10-MIPS | 62.2 | 1290
50-MIPS | 61.2 | 6180
100-MIPS | 60.6 | 11928
1000-MIPS | 59.6 | 70941
Clustering | 51.5 | 20006
PCA-Tree | 32.4 | 21108
WTA-Hash | 40.2 | 20008

Table 1: Accuracy on the SQ test set and average size of the memory used. 10-MIPS has high performance while using only a small amount of memory.

As mentioned previously, designing faster algorithms for K-MIPS is an active area of research. Auvolat et al. (2015) compared several state-of-the-art data-dependent and data-independent methods for faster approximate K-MIPS, and found that clustering-based MIPS performs significantly better than the other approaches. However, the focus of that comparison was on performance during the inference stage. In HMNs, K-MIPS must be used at both the training and inference stages. To verify whether the same trend holds during the learning stage as well, we compared three different approaches:

Clustering: This was explained in detail in Section 3.

WTA-Hash: Winner Takes All hashing (Vijayanarasimhan et al., 2014) is a hashing-based K-MIPS algorithm which also converts MIPS to MCSS by augmenting additional dimensions to the vectors. This method uses n hash functions, and each hash function does p different random permutations of the vector. Then the prefix constituted by the first k elements of each permuted vector is used to construct the hash for the vector (see the sketch at the end of this section).

PCA-Tree: PCA-Tree (Bachrach et al., 2014) is the state-of-the-art tree-based method, which converts MIPS to NNS by vector augmentation. It uses the principal components of the data to construct a balanced binary tree with the data residing in the leaves.

For a fair comparison, we varied the hyper-parameters of each algorithm in such a way that the average speedup is approximately the same. Table 1 shows the performance of all three methods, compared to a full softmax. From the table, it is clear that the clustering-based method performs significantly better than the other two methods. However, performances are lower when compared to the performance of the full softmax.

As a next experiment, we analyze the various strategies proposed in Section 3.1 to reduce the approximation bias of clustering-based K-MIPS:

Top-K: This strategy picks the vectors in the top K clusters as candidates.

Sample-K: This strategy samples K clusters, without replacement, based on a probability distribution derived from the dot product of the query with the cluster centroids. When combined with the Top-K strategy, we exclude clusters selected by the Top-K strategy from sampling.

Rand-block: This strategy divides the memory into several blocks and uniformly samples a random block as candidate.

We experimented with 1000 clusters and 2000 clusters. While comparing the various training strategies, we made sure that the effective speedup is approximately the same. Memory access to facts per query for all the models is approximately 20,000, hence yielding a 5x speedup.

Results are given in Table 2. We observe that the best approach is to combine the Top-K and Sample-K strategies, with Rand-block not being beneficial. Interestingly, the worst performances correspond to cases where the Sample-K strategy is ignored.

Figure 1: Validation curve for the various models (full softmax and 10/50/100/1000-softmax; validation accuracy vs. epochs). Convergence is not slowed down by K-softmax.
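A sketch of the WTA hashing scheme just described; parameter values are illustrative, and this keeps the argmax within each length-k permuted prefix as the hash component, which is the usual WTA construction:

```python
import numpy as np

def wta_hash(X, n=4, p=2, k=8, seed=0):
    rng = np.random.default_rng(seed)
    codes = []
    for _ in range(n):                       # n independent hash functions
        code = np.zeros(len(X), dtype=np.int64)
        for _ in range(p):                   # p permutations per function
            perm = rng.permutation(X.shape[1])
            # winner index within the length-k prefix of the permuted vector
            code = code * k + np.argmax(X[:, perm[:k]], axis=1)
        codes.append(code)
    return np.stack(codes, axis=1)           # one integer code per function

H = wta_hash(np.random.randn(1000, 64))
```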
Top-K | Sample-K | Rand-block | Test Acc. (1000 clusters) | Epochs (1000 clusters) | Test Acc. (2000 clusters) | Epochs (2000 clusters)
Yes | No | No | 50.2 | 16 | 51.5 | 22
No | Yes | No | 52.5 | 68 | 52.8 | 63
Yes | Yes | No | 52.8 | 31 | 53.1 | 26
Yes | No | Yes | 51.8 | 32 | 52.3 | 26
Yes | Yes | Yes | 52.5 | 38 | 52.7 | 19

Table 2: Accuracy on the SQ test set and number of epochs for convergence.

"}, {"section_index": "9", "section_name": "6 CONCLUSION", "section_text": "In this paper, we proposed a hierarchical memory network that exploits K-MIPS for its attention-based reader. Unlike soft attention readers, a K-MIPS attention reader is easily scalable to large memories. This is achieved by organizing the memory in a hierarchical way. Experiments on the SimpleQuestions dataset demonstrate that exact K-MIPS attention is better than soft attention. However, existing state-of-the-art approximate K-MIPS techniques provide a speedup at the cost of some accuracy. Future research will investigate designing efficient dynamic K-MIPS algorithms, where the memory can be dynamically updated during training. This should reduce the approximation bias and hence improve the overall performance.

"}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075, 2015.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.

Ankit Kumar et al. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285, 2015.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations, Workshop Track, 2013.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. arXiv preprint arXiv:1402.0030, 2014.

Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Robert G. Cowell and Zoubin Ghahramani (eds.), Proceedings of AISTATS, pp. 246-252, 2005.

Sebastien Jean, KyungHyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. In Proceedings of ACL 2015, pp. 1-10, 2015.

Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in NIPS, 2016.

Parikshit Ram and Alexander G. Gray. Maximum inner-product search using cone trees. KDD '12, pp. 931-939, 2012.

Ryan Spring and Anshumali Shrivastava. Scalable and sustainable deep learning via randomized hashing. CoRR, abs/1602.08194, 2016.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. arXiv preprint arXiv:1503.08895, 2015.

Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015a.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Proceedings of the International Conference on Learning Representations (ICLR 2015), 2015b. In press.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.

Shi Zhong. Efficient online spherical k-means clustering. In Neural Networks, 2005. IJCNN'05 Proceedings. 2005 IEEE International Joint Conference on, volume 5, pp. 3180-3185. IEEE, 2005.
Anshumali Shrivastava and Ping Li. Improved asymmetric locality sensitive hashing (ALSH) for maximum inner product search (MIPS). In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2015."}]
BkLhzHtlg
[{"section_index": "0", "section_name": "LEARNING RECURRENT REPRESENTATIONS FOR HIERARCHICAL BEHAVIOR MODELING", "section_text": "Eyrun Eyjolfsdottir1, Kristin Branson2, Yisong Yue1, & Pietro Perona

We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high level behavioral phenomena. We test our framework on two types of tracking data, fruit fly behavior and online handwriting. Our results show that 1) taking advantage of unlabeled sequences, by predicting future motion, significantly improves action detection performance when training labels are scarce, 2) the network learns to represent high level phenomena such as writer identity and fly gender, without supervision, and 3) simulated motion trajectories, generated by treating motion prediction as input to the network, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.

"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Behavioral scientists strive to decode the functional relationship between sensory input and motor output of the brain (Tinbergen, 1963; Moore, 2002). In particular, ethologists study the natural behavior of animals, while neuroscientists and psychologists study behavior in a controlled environment, manipulating neural activations and environmental stimuli. These studies require quantitative measurements of behavior to discover correlations or causal relationships between behaviors over time, or between behavior and stimuli; automating this process allows for more objective and precise measurements, and significantly increased throughput (Dell et al., 2014; Anderson & Perona, 2014). Many industries are also concerned with automatic measurement and prediction of human behavior, for applications such as surveillance, assisted living, sports analytics, and self driving vehicles.

Behavior is complex and may be perceived at different time-scales of resolution: position, trajectory, action, activity. While position and trajectory are geometrical notions, action and activity are semantic in nature. The analysis of behavior may therefore be divided into two steps: (a) detection and tracking, where the pose of the body over time is estimated, and (b) action/activity detection and classification, where motion is segmented into meaningful intervals, each one of which is associated with a goal or a purpose. Our work focuses on going from (a) to (b), that is, detecting and classifying actions from motion trajectories. We use data for which tracking and pose estimation is relatively simple, which lets us focus on modeling the temporal dynamics of pose trajectories without worrying about errors stemming from low level feature extraction.

Supervised learning is a powerful tool for learning action classifiers from expert-labeled examples (Jhuang et al., 2010; Burgos-Artizzu et al., 2012; Kabra et al., 2013; Eyjolfsdottir et al., 2014). However, it has two drawbacks. First, it requires a lot of training labels, which involves time consuming and painstaking annotation. Second, behavior measurement is limited to actions that a human can perceive and believes to be important.
We propose a framework that takes advantage of both labeled and unlabeled sequences; it simultaneously learns to predict future motion and detect actions, allowing the system to learn from fewer expert labels and discover unbiased behavior representations.

The framework models the sensory-motor relationship of an agent, predicting motion based on its sensory input and motion history. It can be used to simulate an agent by iteratively feeding motion predictions as input to the network and updating sensory inputs accordingly. A model that can simulate realistic behavior has learnt to emulate the generative control laws underlying behavior, which could be a useful tool for behavior analysis (Simon, 1996; Braitenberg, 1984).

Our model is constructed with the goal that it will learn to represent and discover behaviors at different semantic scales, offering an unbiased way of measuring behavior with minimal human input. Recent work by Berman et al. (2014) and Wiltschko et al. (2015) shows promising results towards unsupervised behavior representation. Compared to their work, our framework offers three advantages: our model learns a hierarchical embedding of behavior, can be trained semi-supervised to learn specific behaviors of interest, and our sensory-motor representation enables the model to learn interactive behavior of an agent with other agents and with its environment.

Our experiments focus mainly on the behavior of fruit flies, Drosophila melanogaster, a popular model organism for the study of behavior (Siwicki & Kravitz, 2009). To explore the generality of our approach we also test our model on online handwriting data, an interesting human behavior that produces two dimensional trajectories.

To summarize our contributions:

1) We propose a framework that simultaneously models the sensory-motor relationship of an agent and classifies its actions, and can be trained with partially labeled sequences.
2) We show that motion prediction is a good auxiliary task for action classification, especially when training labels are scarce.
3) We show that simulated motion trajectories resemble trajectories from the data domain and can be manipulated by activating discriminative cell units.
4) We show that the network learns to represent high level information, such as gender or identity, at higher levels of the network and low level information, such as velocity, at lower levels.
5) We test our framework on the spontaneous and sporadic behavior of fruit flies, and the intentional and structured behavior of handwriting.

Hidden Markov models (HMMs) have been extensively used for sequence classification. The motivating assumption for HMMs is that there exists a process that transitions with some probability between discrete states, each of which emits observations according to some distribution, and the objective is to learn these functions given a sequence of observations and states. This model is limited in that its transition functions are linear, its state space is discrete, and its emission distribution is generally assumed to be Gaussian, although generalizations of the model that fall under the category of dynamic Bayesian networks are more expressive (Murphy, 2002).

Recurrent neural networks (RNNs) have recently been shown to be extremely successful in classifying time series data, especially with the popularization of long short term memory cells (Hochreiter & Schmidhuber, 1997), in applications such as speech recognition (Graves et al., 2013). RNNs have also been used for generative sequence prediction of handwriting (Graves, 2013) as well as speech synthesis (Chung et al., 2015).

Imitation learning involves learning to map a state to an action, from a demonstrated sequence of actions. This is a supervised learning technique which, when implemented as an RNN, can be trained via backpropagation using the action error computed at every time step. The problem with this approach is that the domain of states that an agent is trained on consists only of states that the demonstrators encounter, and when an agent makes a mistake it finds itself in a situation never experienced during training.
Reinforcement learning handles this by letting an agent explore the domain using an action policy, and updating the policy based on a goal-specific penalty or reward, which may be obtained after taking several actions. This exploration can be extremely expensive, and therefore it is common to precede reinforcement learning with imitation learning to start the agent off with a reasonable policy. This strategy is used in (Mnih et al., 2015), where an agent is trained to play Atari games, and in (Silver et al., 2016) for mastering the game of Go.

Autoencoders (Rumelhart et al., 1986) have been used in semi-supervised classification to pre-train a network on an auxiliary task, such as denoising, to prevent overfitting on a small number of labeled data (Baldi, 2012). Recent work in this area (Rasmus et al., 2015) proposes to train on the primary and auxiliary task concurrently, using lateral connections (Valpola, 2015) between encoding and decoding layers to allow higher layers of the network to focus on high level features.

Our framework takes inspiration from each of the works described here."}, {"section_index": "2", "section_name": "3 MODEL", "section_text": "Our model is a recurrent neural network, with long short term memory, that simultaneously classifies actions and predicts future motion of agents (insects, animals, and humans). Rather than actions being a function of the recurrent state, as is common practice, our model embeds actions in recurrent state units. This way the recurrent function encodes action transition probabilities, and motion prediction is a direct function of actions, similar to an HMM. The network takes as input an agent's motion and sensory input at every time step, and outputs the agent's next move according to a policy, which is effectively learnt via imitation learning. Similar to autoencoders, our model has a discriminative path, used to embed high level information, and a generative path, used to reconstruct the input domain, in our case filling in the future motion. Each discriminative recurrent cell is fully connected with its corresponding generative cell, allowing higher level states to represent higher level information, similar to the idea of Ladder networks (Valpola, 2015)."}, {"section_index": "3", "section_name": "3.1 ARCHITECTURE", "section_text": "Figure 1: Left: A 3D depiction of our network unrolled for 3 timesteps. The highlighted cells show the path from an input through a classification cell to a motion prediction output.
During training, motion prediction loss Lx is computed at every timestep, and classification loss Ly is computed only at frames for which labels are provided. The diagonal connections between discriminative and generative cells enable higher levels of the network to represent high level information. Vector v represents the agent's sensory input, x its motion, h its internal state, and y labeled actions. Right: A zoom in on the blue and green cells showing the recurrent state (horizontal arrows) and inputs to the recurrent cell function f. Merging of arrows represents vector concatenation, and branching vector duplication.

The flow of information through the network, and the cost associated with its classification and prediction, is expressed by the following equations (h_i^l denotes a discriminative state, \hat{h}_i^l a generative state, at level l and frame i):

h_i^1 = f([x_i, v_i], h_{i-1}^1)
h_i^l = f(h_i^{l-1}, h_{i-1}^l),  l = 2, ..., L
\hat{h}_{i+1}^L = f(h_i^L, \hat{h}_i^L)
\hat{h}_{i+1}^l = f([\hat{h}_{i+1}^{l+1}, h_i^l], \hat{h}_i^l),  l = L-1, ..., 1
\hat{y}_i = (h_i^L(1:N) + 1) / 2
\hat{x}_{i+1} = g(\hat{h}_{i+1}^1)
C_y = \sum_{i=1}^{T} L_y(y_i, \hat{y}_i),  C_x = \sum_{i=1}^{T} L_x(x_{i+1}, \hat{x}_{i+1})
C = \lambda C_y + (1 - \lambda) C_x

where f is a recurrent cell function, g is a transformation, L_y computes classification loss (on frames for which labels are provided), and L_x computes motion prediction loss. The total cost, C, combines the misclassification cost, C_y, and the misprediction cost, C_x, using \lambda to trade off the two. N is the number of labeled action classes, L the number of levels, T the number of frames, l is the layer index and i the frame index. The first N units of state h_i^L are forced to be classification units; they are scaled from [-1, 1] to [0, 1] (assuming f's activation function is tanh) and assigned to \hat{y}_i.

The model can be thought of as two parallel recurrent networks: the discriminative network takes as input an agent's motion, x, and environmental sensory input, v, and propagates them up through its hidden states, which encode high level information, including action labels, y. The generative network decodes the states of the discriminative network, propagating information down to predict the agent's motion at the next time step, \hat{x}. The two networks have the same number of layers and are connected diagonally at each layer, such that the information encoded in the hidden units of the discriminative network is propagated to the corresponding layer of the generative network at the next time step. Intuitively, these can be thought of as skip connections or "shortcuts" which let low level motion information propagate directly through lower levels of the network, leaving higher levels of the network free to represent high level phenomena, such as goals or individual characteristics. Our experiments confirm this intuition. The model can be trained without any action labels, in which case the hidden state may be used to discover high level information about the data, or with action labels for a subset of the data, in which case each action is assigned to a hidden state unit and will thus contribute to subsequent motion prediction and action classification.

The model is presented as part of a general framework where f, g, and the number of levels/units are architectural choices to be optimized for each dataset. For our experiments we found that 2-3 levels of recurrent cells with 100-200 units worked well, with f as a gated recurrent unit (GRU) cell (Cho et al., 2014) and g as a linear transformation. The choice of loss functions depends on the target type: sigmoid cross entropy for multitask classification (where actions can co-occur), softmax cross entropy for multiclass classification (where actions are mutually exclusive), and sum of squared differences for regression (where outputs are real valued). The optimal value for \lambda depends both on the output domains of L_y and L_x and on whether the primary goal is classification or simulation. Data-specific model and training parameters are described in Section 5, and further training details are discussed in supplementary material.
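To make the recurrences above concrete, the following is a minimal JavaScript sketch of one timestep of the two-path network. The cell function f is passed in as an argument (the paper uses GRU cells); the toy cell and vector helpers here are illustrative assumptions only, and g is taken as the identity for brevity.

// One timestep of the discriminative (h) and generative (gh = ĥ) recurrences.
// x: motor input, v: sensory input, h/gh: per-level state arrays, N: # action classes.
function toyCell(input, prevState) {
  // stand-in for a GRU: tanh of a decayed state plus the mean input drive
  const drive = input.reduce((a, b) => a + b, 0) / input.length;
  return prevState.map(s => Math.tanh(0.5 * s + drive));
}

function step(f, x, v, h, gh, N) {
  const L = h.length;
  const hNew = new Array(L), ghNew = new Array(L);
  hNew[0] = f(x.concat(v), h[0]);                       // h_i^1 = f([x_i, v_i], h_{i-1}^1)
  for (let l = 1; l < L; l++) {
    hNew[l] = f(hNew[l - 1], h[l]);                     // h_i^l = f(h_i^{l-1}, h_{i-1}^l)
  }
  ghNew[L - 1] = f(hNew[L - 1], gh[L - 1]);             // ĥ_{i+1}^L = f(h_i^L, ĥ_i^L)
  for (let l = L - 2; l >= 0; l--) {
    ghNew[l] = f(ghNew[l + 1].concat(hNew[l]), gh[l]);  // ĥ_{i+1}^l = f([ĥ_{i+1}^{l+1}, h_i^l], ĥ_i^l)
  }
  const yHat = hNew[L - 1].slice(0, N).map(u => (u + 1) / 2); // ŷ_i, scaled to [0, 1]
  const xHat = ghNew[0].slice();                              // x̂_{i+1} = g(ĥ_{i+1}^1), g = identity here
  return { h: hNew, gh: ghNew, yHat: yHat, xHat: xHat };
}

// Example: 2 levels, 4-unit states, 3-dim motor input, 2-dim sensory input, 2 classes.
const z = () => [0, 0, 0, 0];
const out = step(toyCell, [0.1, -0.2, 0.05], [0.3, 0.0], [z(), z()], [z(), z()], 2);

Note how the diagonal connections appear as the hNew terms inside the generative recurrence: the generative path at level l always sees the current discriminative state of the same level.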
"}, {"section_index": "4", "section_name": "3.2 MULTIMODAL PREDICTION", "section_text": "Evidence suggests that animal behavior is nondeterministic (Roberts et al., 2016); thus, motion prediction may be better represented as a probability distribution than as a function. When future motion is multimodal, the best regression model will pick the average motion of the different modes, which may not lie within any of the actual modes (visualized in supplementary material). This observation has been made by others in the context of modeling real-valued sequences with RNNs: Graves (2013) models the output of an RNN as a Gaussian mixture model, and Chung et al. (2015) additionally model the hidden recurrent states as random variables. We take a nonparametric approach, making no assumption about the shape of the distribution. We discretize motion into bins and treat the task of predicting future motion as independent multiclass classification problems, one per motion feature, which results in a probability distribution over all bins for each dimension. More concretely, each dimension of x is assigned n bins, and the target for x_{i+1} becomes the binned version of x_{i+1}, denoted \bar{x}_{i+1}, which has exactly one nonzero entry for each dimension of x. The prediction \hat{x}_{i+1} then becomes a discrete distribution over the bins for each feature dimension, and the motion prediction loss becomes L_x(\bar{x}_{i+1}, \hat{x}_{i+1}) = \sum_d crossentropy(\bar{x}_{i+1}^d, \hat{x}_{i+1}^d), summed over the motion dimensions d, as opposed to the Euclidean distance in the case when x is a real valued vector. The number of bins determines the granularity of the motor control; a greater number of bins means more precise motion control but is also more expensive to train."}, {"section_index": "5", "section_name": "3.3 SIMULATION", "section_text": "Given a model that can predict an agent's future motion from its current state, a virtual agent can be simulated by iteratively feeding the predicted motion \hat{x}_{i+1} as input x_{i+1} to the network. We pick a bin by sampling from the distribution given by \hat{x}_{i+1}, and assign a real value to x_{i+1} by sampling uniformly from the selected bin. An agent's perception of the environment depends on the agent's location, and therefore the sensory features v_{i+1} must be updated for each forward simulation step to correspond to the agent's perspective at time i + 1. When simulating multiple agents that interact with one another, each agent is moved according to its \hat{x}_{i+1}, and then v_{i+1} is computed for each agent based on the new configuration of all agents.
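The binned target of Section 3.2 and the sampling step of Section 3.3 can be sketched as follows for a single motion dimension. The equal-width bin layout over a range [lo, hi] is an illustrative assumption; the paper does not specify the bin boundaries.

// Discretize one motion feature into n equal-width bins over [lo, hi].
function binIndex(value, lo, hi, n) {
  const idx = Math.floor((value - lo) / (hi - lo) * n);
  return Math.min(n - 1, Math.max(0, idx)); // clamp out-of-range values
}

// One-hot target x̄ for a scalar motion value.
function oneHot(value, lo, hi, n) {
  const t = new Array(n).fill(0);
  t[binIndex(value, lo, hi, n)] = 1;
  return t;
}

// Cross entropy between the one-hot target and a predicted bin distribution;
// the total loss L_x sums this over all motion dimensions.
function crossEntropy(target, probs) {
  let ce = 0;
  for (let k = 0; k < target.length; k++) {
    if (target[k] > 0) ce -= Math.log(probs[k] + 1e-12);
  }
  return ce;
}

// Simulation step: sample a bin from the predicted distribution, then sample
// uniformly inside the chosen bin to obtain a real value for x_{i+1}.
function sampleMotion(probs, lo, hi) {
  const n = probs.length, binWidth = (hi - lo) / n;
  let r = Math.random(), k = 0;
  for (; k < n - 1; k++) { r -= probs[k]; if (r <= 0) break; }
  return lo + (k + Math.random()) * binWidth;
}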
"}, {"section_index": "6", "section_name": "4 DATA", "section_text": "Our framework is agent centric: it models the behavior of each agent individually, based on how it moves and senses its surroundings, including other agents. It is applicable to any data that can be represented in terms of motor control (e.g. a joystick controller) and sensory input that captures context from the environment (e.g. a 1st person camera). We test our model on two types of data, fly behavior and online handwriting. Both can be thought of as a type of behavior represented in the form of trajectories, but the two are complementary. First, flies behave spontaneously, performing actions of interest sporadically and in response to their environment, while handwritten text is intentional and highly structured. Second, handwriting varies significantly between different writers in terms of size, speed, slant, and proportions, while inter-fly variation is relatively small. We use 4 datasets for our experiments (listed below) with the aim to answer the following questions: 1) does motion prediction improve action classification, 2) can the model generate realistic simulations (does it learn the sensory-motor control), and 3) can the model discover novel behavioral phenomena?

Fly-vs-Fly (Eyjolfsdottir et al., 2014) contains pairs of fruit flies engaging in 10 labeled courtship and aggressive behaviors. We include this dataset in our experiments to see how our model compares with our previous action detection work, which relies on handcrafted window features.

FlyBowl is a video of 10 male and 10 female fruit flies interacting, and is labeled with male wing extensions, which are part of their courtship behavior. With this dataset we were particularly interested in whether our model could simulate a virtual fly in a complex, dynamic environment.

SynthFly is a synthetic dataset containing a single fly moving inside of a rectangular chamber with a stationary object located in the center. The fly is synthesized to move according to the control laws listed in Figure 2. The purpose of this dataset is to test whether our model could learn generative control rules, particularly ones that enforce non-deterministic behavior (see laws 4 and 5).

IAM-OnDB (Liwicki & Bunke, 2005) contains handwritten text from 195 different writers, acquired using a smart whiteboard that records a list of (x, y) coordinates for each pen stroke. The data is weakly labeled, with each sequence separated into short lines of transcribed text. For consistency with our framework we hand annotated strokes of 10 writers, marking the start and end of the 26 lower case characters, which we use along with data from 35 unlabeled writers for our experiments.

All data, along with details about training and test splits, will be available in supplementary material.

Snapshots: Fly-vs-Fly, FlyBowl, IAM-OnDB.

SynthFly - control laws:
1) walk forward with random noise
2) at wall, rotate in direction of least resistance
3) when object in front visual field, walk towards it
4) extend either left or right wing (random)
5) at object, rotate left or right (alternating)
6) repeat 1)

dataset | total # frames | # trials | # agents per trial | # labeled actions | total # instances | % frames
Fly-vs-Fly | 3.7M | 47 | 2 | 10 | 8599 | 10
FlyBowl | 0.6M | 1 | 20 | 1 | 961 | 5
SynthFly | 0.4M | 4 | 1 | 0 | 0 | 0
IAM-OnDB* | 1.5M | 45 | 1 | 26 | 12049 | 88

Figure 2: Snapshots from the three labeled datasets used for our evaluation and a list of control laws used to generate synthetic fly trajectories. The table summarizes the statistics of each experimental dataset, where total # frames sums over all trials (videos / text documents) within an experiment and agents within a trial, total # instances sums over all action classes, and % frames is the percent of frames in labeled sequences containing actions of interest. IAM-OnDB* is a subset of IAM-OnDB with additional annotations for 10 of its trials.
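For concreteness, the six control laws in Figure 2 can be realized by a toy controller like the one below. Only the qualitative structure of laws 1-6 is taken from the text; the chamber geometry, thresholds, and turn rates are illustrative assumptions, not the parameters used to generate SynthFly.

// Toy SynthFly controller (sketch). s = { x, y, theta, lastLeft, wing }.
const W = 100, H = 60, OBJ = { x: 50, y: 30, r: 5 }; // chamber and central object (assumed)

function synthFlyStep(s) {
  const dx = OBJ.x - s.x, dy = OBJ.y - s.y;
  const dist = Math.hypot(dx, dy);
  const bearing = Math.atan2(dy, dx) - s.theta;      // object direction in the fly's frame
  if (s.x < 5 || s.x > W - 5 || s.y < 5 || s.y > H - 5) {
    // law 2: near a wall, rotate in the direction of least resistance
    const toCenter = Math.atan2(H / 2 - s.y, W / 2 - s.x) - s.theta;
    s.theta += Math.sign(Math.sin(toCenter)) * 0.3;
  } else if (dist <= OBJ.r + 2) {
    // law 5: at the object, rotate left or right (alternating)
    s.theta += (s.lastLeft ? -1 : 1) * (Math.PI / 2);
    s.lastLeft = !s.lastLeft;
  } else if (Math.cos(bearing) > 0 && Math.abs(Math.sin(bearing)) < 0.5) {
    // law 3: object in the frontal visual field -> walk towards it
    s.theta += 0.2 * Math.sin(bearing);
  }
  // law 4: extend either the left or the right wing (random choice)
  s.wing = Math.random() < 0.5 ? 'left' : 'right';
  // law 1: walk forward with random noise (law 6: repeat every frame)
  const speed = 1 + 0.3 * (Math.random() - 0.5);
  s.theta += 0.05 * (Math.random() - 0.5);
  s.x += speed * Math.cos(s.theta);
  s.y += speed * Math.sin(s.theta);
  return s;
}

Laws 4 and 5 are the ones that make the next-step motion genuinely multimodal: given the same state, either wing (or either turn direction) is a valid outcome, which is exactly what the binned, probabilistic prediction target of Section 3.2 is designed to capture.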
Fly motor control dimensions include fwd, side, yaw, and left/right wing_ang and wing_len; handwriting motor control is (dx, dy) with stroke visibility z.

Figure 3: Left: Sensory input v for fruit flies represents how a fly sees other flies and chamber walls; their motor control x lets them move their body along 8 dimensions (incl. right_wing_ang/len). Right: Motor control x for handwriting is represented as a vector (dx, dy) along with binary stroke visibility z (pen on/off whiteboard).

Fly representation: Motor control features, x, describe the locomotion of a fly. The flies are tracked from video using FlyTracker1, and from the tracked fly poses we extract motion features represented in the fly's frame of reference. The 8 motion features, displayed on top of the close-up fly2 in Figure 3, are designed such that they can animate virtual fly agents. Sensory input features, v, are inspired by a fly's compound eye, which consists of 750 compactly aligned ommatidia. Approximating its vision as a one dimensional 360° view, we place 72 5° circular sectors around a fly agent, aligned with its orientation, and project flies that overlap with a sector onto its artificial retina with intensity inversely proportional to their distance to the agent. Thus, flies close to the agent yield high intensity in several pixels, and flies that are far away take up few pixels with low intensity (compare the scene in Figure 3 with v sensed by the agent). We represent chamber walls similarly, projecting them onto a separate channel, decreasing intensity exponentially with distance to the agent. This representation is invariant of the shape of the chamber and the number of flies present in the chamber.

In order to compare our model with methods presented in Eyjolfsdottir et al. (2014), independently of feature representation, we use the 36 features provided with the Fly-vs-Fly dataset. We assign the first 8 dimensions (describing the fly's motion) as motor control x, and the remaining features (describing the fly's position relative to the other fly, and feature derivatives) as sensory input v.

Handwriting representation: We represent the motor control, x, as (dx, dy, z), where dx and dy are the x and y displacements from the previous pen recording and z is a binary variable denoting segment visibility. We normalize dx and dy for each writer, providing invariance to writing speed, but character size (number of points per character), slant, and other variations are not explicitly accounted for. As handwriting is not influenced by a changing environment, but rather is a function of the internal state and current motion of the writer, we leave the sensory input, v, empty.

We evaluate our framework on three objectives: classification, simulation, and discovery. For classification we show the benefit of motion prediction as an auxiliary task, compare our performance on Fly-vs-Fly with previous work, and analyze the performance on IAM-OnDB. We qualitatively show that simulation results for fly behavior and handwriting look convincing, and that the model is able to learn the control laws used to generate the SynthFly dataset. For discovery we show that hidden states of the model, trained only to predict motion (without any action labels), cleanly capture high level phenomena that affect behavior, such as fly gender and writer identity.

1 www.vision.caltech.edu/Tools/FlyTracker
2 Original photograph from gompel.org/drosophilidae
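The artificial retina described under "Fly representation" above can be sketched as follows. The exact projection used in the paper is not fully specified, so this is an approximation: 72 sectors of 5°, intensity inversely proportional to distance, and a simple angular extent per fly so that nearby flies cover several sectors.

// Project other flies onto a 72-sector, 360° 1D retina (sketch).
// agent/others: { x, y, theta }; returns the fly channel of v.
function flyRetina(agent, others, nSectors = 72, bodyRadius = 1.0) {
  const v = new Array(nSectors).fill(0);
  const sector = 2 * Math.PI / nSectors;                 // 5 degrees per sector
  for (const o of others) {
    const dx = o.x - agent.x, dy = o.y - agent.y;
    const d = Math.hypot(dx, dy);
    if (d === 0) continue;
    // direction of the other fly in the agent's frame of reference
    const ang = Math.atan2(dy, dx) - agent.theta;
    // angular half-width subtended by the other fly's body
    const halfWidth = Math.atan2(bodyRadius, d);
    const k0 = Math.floor((ang - halfWidth) / sector);
    const k1 = Math.floor((ang + halfWidth) / sector);
    for (let k = k0; k <= k1; k++) {
      const idx = ((k % nSectors) + nSectors) % nSectors; // wrap around the circle
      v[idx] = Math.max(v[idx], 1 / d);                   // closer flies -> higher intensity
    }
  }
  return v; // walls would fill a second channel, with exponential falloff instead
}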
Model details: We trained a separate model for each dataset, using a sequence length of 50, a batch size of 20, and 51 bins per dimension for motion prediction. For fly behavior data we used 2 levels of GRU cells (4 cells total) of 100 units each, and for handwriting we used 3 levels of GRU cells (6 cells total) of 200 units each. Parameters were determined using a rough parameter sweep on a subset of the training data. Further training details are described in supplementary material. Our model is implemented in Tensorflow (Abadi et al., 2015).

a) benefit of auxiliary task: F1-frame vs. % training labels (3, 6, 12, 25, 50, 100), for FlyBowl, Fly-vs-Fly, and IAM-OnDB.

b) Fly-vs-Fly comparison with prior work:
method | F1-frame | F1-bout | F*
hand crafting + window SVM + HMM | 0.760 | 0.770 | 0.765
BENet | 0.717 | 0.665 | 0.690
BENet + filter | 0.716 | 0.739 | 0.727
BESNet | 0.734 | 0.627 | 0.677
BESNet + filter | 0.752 | 0.724 | 0.738

c) IAM-OnDB example: input x, label y, and classification score \hat{y} over time.

Figure 4: a) Performance of model trained with (solid, BESNet) and without (dashed, BENet) motion prediction, showing that BESNet requires significantly fewer labels to match the performance of BENet. b) Our model reaches performance competitive with Eyjolfsdottir et al. (2014), without handcrafting or context from future frames. c) Input x, label y, and classification score \hat{y}, colored according to character label, showing high confusion at the beginning of characters, partly explaining the lower F1-frame performance on IAM-OnDB."}, {"section_index": "7", "section_name": "5.1 CLASSIFICATION", "section_text": "Action labeling involves recording the start frame, end frame, and class label of each action interval, which we refer to as a bout. From a sequence of frame-wise classifications, consecutive frames of the same class prediction are consolidated into a single bout. To measure both duration and counting accuracy we use the performance measures described in Eyjolfsdottir et al. (2014), namely the F1 score (harmonic mean of precision and recall), on a per-frame and per-bout level. Bout-wise precision and recall is computed by assigning predicted bouts to ground truth bouts one-to-one, maximizing intersection over overlap. F* is the harmonic mean of the F1-frame and F1-bout scores.

Our goal for classification is to reduce the number of training labels without loss in performance. To measure the benefit of motion prediction as an auxiliary task we compare our model, which we will refer to as Behavior Embedding Sensory-motor Network (BESNet), with our model without its generative part (similar to a standard RNN but with action labels embedded in hidden states, shown in Figure 5), referred to as Behavior Embedding Network (BENet). We trained both models on each dataset using 3-100% of available labels. As BESNet is trained to predict future motion, it makes use of unlabeled sequences during training, whereas BENet does not. Figure 4 a) shows the frame-wise F1 score for each of the 36 trained models (3 datasets, 6 label fractions, 2 model types), averaged over all action classes in a dataset. This experiment shows that motion prediction as an auxiliary task significantly improves classification performance, especially when labels are scarce.

In Figure 4 b) we compare the performance of our network with the best performing method on Fly-vs-Fly, a window based support vector machine (SVM) that uses hand crafted window features and fits an HMM to the output for smoother classification, outperforming sophisticated methods such as structured SVM. For this comparison we used the features published with the dataset, as described in Section 4.
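The bout consolidation and bout-wise matching described at the beginning of this section can be sketched as follows. The greedy one-to-one assignment by overlap is an approximation of the matching used for the reported scores, which is not spelled out in full here.

// Consolidate a frame-wise label sequence into bouts.
// labels: array of class ids per frame (0 = no action); end is exclusive.
function toBouts(labels) {
  const bouts = [];
  for (let i = 0; i < labels.length; ) {
    let j = i + 1;
    while (j < labels.length && labels[j] === labels[i]) j++;
    if (labels[i] !== 0) bouts.push({ cls: labels[i], start: i, end: j });
    i = j;
  }
  return bouts;
}

// Bout-wise F1 via greedy one-to-one assignment by overlap (approximation).
function boutF1(pred, truth) {
  const used = new Set();
  let matched = 0;
  for (const p of pred) {
    let best = -1, bestOv = 0;
    truth.forEach((t, k) => {
      if (used.has(k) || t.cls !== p.cls) return;
      const ov = Math.min(p.end, t.end) - Math.max(p.start, t.start);
      if (ov > bestOv) { bestOv = ov; best = k; }
    });
    if (best >= 0) { used.add(best); matched++; }
  }
  const precision = pred.length ? matched / pred.length : 0;
  const recall = truth.length ? matched / truth.length : 0;
  return precision + recall ? 2 * precision * recall / (precision + recall) : 0;
}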
Although recurrent networks implicitly enable smooth classification, different actions require different levels of smoothness. To avoid over-segmentation of action intervals, we smooth the output of our network by applying a flat filter of size equal to 10% of the mean duration of each class. Our results show that filtering significantly improves the bout-wise performance, and that our performance on the Fly-vs-Fly test set is comparable with that of Eyjolfsdottir et al. (2014), using no handcrafting and no context of future frames (apart from smoothing).

We applied the same type of filtering to the classification output of IAM-OnDB as we did for Fly-vs-Fly, and obtained an F1-(frame, bout) of (0.445, 0.585) averaged over all classes, and (0.567, 0.690) averaged over all instances (weighted average of classes). Figure 4 c) demonstrates that at the beginning of some characters there tends to be more confusion in \hat{y} than towards the end, which is unsurprising as the beginning of these characters looks approximately the same."}, {"section_index": "8", "section_name": "5.2 MOTION PREDICTION", "section_text": "Before we look at simulation results, we quantitatively measure the accuracy of one-step predictions. We compute the log-likelihood of FlyBowl test sequences under the motion prediction model, loglik(x) = \sum_i \sum_d \log(\hat{x}_i^d \cdot \bar{x}_i^d), where \bar{x}_i^d is the one-hot binned ground truth of motion dimension d at frame i, and \hat{x}_i^d is a probability distribution over the bins predicted by the model.

We compare our model with the following motion prediction policies: 1) a uniform distribution over bins, 2) the distribution over bins computed from the training set, 3) a constant motion policy that copies the previous indicator vector as motion prediction, and 4) a smooth version of 3), filtered using an optimized Gaussian kernel. The results, shown in Figure 5, demonstrate that the recurrent models learn a significantly better policy. In addition, we compare variants of our model and a standard RNN within our framework (with the same sensory-motor representation, multimodal output, and GRU cells), which shows that recurrence is essential for good motion prediction and that diagonal connections provide a slight performance gain. In Section 5.4 we show the main benefit of the diagonals.

a) network variants: BENet, BESNet, BESNet (no diagonals), BESNet (no recurrence), RNN.

b) policy | -loglik(x)
uniform distribution | 119212
training distribution | 75312
last motion | 104334
last motion smoothed | 72256
RNN | 57917
BESNet no recurrence | 74524
BESNet no diagonals | 57903
BESNet | 57798

Figure 5: a) Network variants used in experiments (compare BESNet to the highlighted cells in its unrolled visualization in Figure 1). b) 1-step motion prediction performance on the FlyBowl test set; see text above for explanation.
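The evaluation measure above, together with baselines 1) and 2), can be sketched as follows. The add-one smoothing in the training-set baseline and the pooling of bin counts over dimensions are assumptions made for brevity.

// Log-likelihood of binned test motion under a prediction policy.
// binned[i][d] = target bin index at frame i, dimension d;
// policy(i, d) returns a probability distribution over the bins.
function logLik(binned, policy) {
  let ll = 0;
  for (let i = 0; i < binned.length; i++) {
    for (let d = 0; d < binned[i].length; d++) {
      ll += Math.log(policy(i, d)[binned[i][d]] + 1e-12);
    }
  }
  return ll;
}

// Baseline 1: uniform distribution over n bins.
const uniformPolicy = n => () => new Array(n).fill(1 / n);

// Baseline 2: bin distribution estimated from the training set
// (counts pooled over dimensions, with add-one smoothing).
function trainingPolicy(trainBinned, n) {
  const counts = new Array(n).fill(1);
  for (const frame of trainBinned) for (const b of frame) counts[b]++;
  const total = counts.reduce((a, b) => a + b, 0);
  const dist = counts.map(c => c / total);
  return () => dist;
}

// Usage: -logLik(testBinned, uniformPolicy(51)) corresponds to row 1 of Figure 5 b).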
"}, {"section_index": "9", "section_name": "5.3 SIMULATION", "section_text": "One-step prediction performance does not clearly reveal whether a model has learnt the generative process underlying the training data. In order to get a better notion of that, we look at simulations produced by the learnt models, which can be thought of as very long term predictions. As motion prediction is probabilistic, comparing long term predictions with ground truth becomes difficult, as the domain of probable positions becomes exponentially large. Qualitative inspection, however, gives a good intuition about whether the simulated agent has learnt reasonable control laws.

While the underlying generative process for the motion of real flies is unknown, simulations from the model trained to imitate them suggest that the model has learnt a reasonable policy. During simulation we place no physical constraints on how the flies can move, but our results show that simulated FlyBowl agents avoid collisions with the chamber walls and with other flies, and that agents are attracted to other flies and occasionally engage in courtship-like behavior. This is shown in Figure 6 and better visualized as video in supplementary material.

Simulated handwriting is easier to visualize in an image, and we are used to recognizing the structure it should produce. Figure 7 shows that the model trained on IAM-OnDB produces character-like trajectories in word-like combinations. Note that handwriting is generated one (dx, dy, z) vector at a time, and each character is composed of roughly 20 such points on average. On the right hand side of Figure 7 we show that we can increase the generation of specific characters by activating their classification units (forcing their values to 1 and others to 0) during simulation.

Figure 8 shows the output of two recurrent units of the SynthFly model that indicate that the model was able to learn control rules that were designed to ensure a multimodal motion prediction target. One unit fires in correlation with either left or right wing extension, and the other toggles between a negative and positive state as the agent turns left or right to avoid the object. In supplementary material we show a video of this simulation and compare it to a simulation from the model trained with deterministic motion prediction. This comparison clearly demonstrates the benefit of treating motion prediction as a distribution over bins, as the deterministic agent quickly becomes degenerate.

Figure 6: a) 10 x 20-frame lookaheads (simulations) for each test fly from its current location, demonstrating the non-deterministic nature of the motion prediction. The ground truth 20-frame future trajectory is outlined in black for comparison. b) shows trajectories of 20 flies simulated for 1000 frames, and c) shows 1000-frame trajectories for 20 real flies interacting. The simulation shows that the model has learnt a preference for staying near the boundary and to avoid walking through the boundary.

Figure 7: Left: Text generated by our model, one vector at a time (approximately 20 vectors per character). Right: Text generated by the same model while "activating" character classification units of the model during simulation, shown in two lines per character.

Figure 8: Comparison between synthetic fly (ground truth) and simulation by our model. The wing angles, distance to object, and left/right turn show the agent's motion over time, and the two hidden units indicate that the model has learnt to represent control laws 4 and 5 used to generate the synthetic trajectories."}, {"section_index": "10", "section_name": "5.4 DISCOVERY", "section_text": "We motivated the structure of our network, specifically the diagonal connections between discriminative and generative cells, with the intuition that it would allow higher levels of the network to better represent high level phenomena. To verify this we train models to only predict future motion, with no classification target, and visualize what the hidden states capture. We apply the model to [x, v], obtaining hidden state vectors h^l and \hat{h}^l, l \in {1, ..., L}, and prediction \hat{x}, map the data points (time steps of each fly/writer) from each state to 2 dimensions using t-distributed stochastic neighbor embedding (tSNE, Maaten & Hinton (2008)), and plot them in colors based on known phenomena.
Figure 9: Left: Hidden state values of a 3 level model trained without any labels on IAM-OnDB, reduced to 2 dimensions using tSNE mapping and colored by stroke length, character class, and writer identity. The network discovers writer identity at the highest level, while lower level phenomena such as stroke length are represented at lower levels. Right: tSNE mapped input, output, and hidden state values of the FlyBowl model (trained without any labels), colored by gender (male/female) and left/right male wing extension.

In Figure 9 we plot the data points of a 3 level (L=3) model trained on IAM-OnDB in this low dimensional embedding, and color code them according to three criteria: stroke length, character class, and writer identity. The results show that stroke length is well clustered at low levels but not at high levels, characters are best clustered at mid to top discriminative levels, and writer identity is extremely well clustered at the top generative level but not at low levels. We ran the same experiment for the model trained without diagonal connections (which, without a classification target, is effectively a standard RNN with 6 levels of GRU cells), which did not learn to represent writer identity in any of its hidden states. Intuitively this is because that network has to carry low level information through every state to predict low level information at the other end, whereas BESNet carries it directly through the low level diagonal connections, leaving higher hidden states free to capture high level information. A visualization comparing both models is shown in supplementary material, along with a quantitative measurement of our observation.

The same visualization for the model trained on FlyBowl, where data points are color coded by gender and left/right wing extension, shows (Figure 9, right) that gender is very mixed in the input and output states but well separated in the top generative state, while lower level information such as wing extension is well represented at lower levels of the network.

We have proposed a framework for modeling the behavior of animals, that simultaneously classifies their actions and predicts their motion. We showed empirically that motion prediction (a target that requires no labeling) is a good auxiliary task for training action classifiers, especially when labels are scarce. We also showed that the generative task can be used to simulate trajectories that look natural to the human eye, and that activating classification units increases the frequency of that action in the simulation. Finally, we showed that our model lends itself well to discovery of high level information from the data, by visualizing what is captured in its hidden states.

We tested the framework on two types of data, fly behavior and online handwriting, and we anticipate that it will scale to more complex data with appropriate tuning of hyperparameters and abstraction of visual input. For example, application to human motion capture with 1st person video as sensory input might require greater model complexity to account for the higher dimensional motor control, and pre-processing of the sensory input, e.g. with a convolutional neural network, to extract a higher level sensory representation before feeding it to the dynamical system.
Moving forward, we are interested in working on hierarchical label embedding in the states, assigning higher order activities to units higher in the network. Along those lines, a discrete recurrent network could be trained separately on the wealth of available text, and be placed on top of a real-valued handwriting network. We also aim to explore how this framework can be used to understand the neural mechanisms underlying the generation of behavior in flies.

We would like to thank David J. Anderson and Charless Fowlkes for insightful discussions, and acknowledge Google and The Simons Foundation for their financial support."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "David J Anderson and Pietro Perona. Toward a science of computational ethology. Neuron, 84(1):18-31, 2014.

Valentino Braitenberg. Vehicles: Experiments in Synthetic Psychology. MIT Press, 1984.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in Neural Information Processing Systems, pp. 2962-2970, 2015.

Alan Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 6645-6649. IEEE, 2013.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL http://tensorflow.org/. Software available from tensorflow.org.

Anthony I Dell, John A Bender, Kristin Branson, Iain D Couzin, Gonzalo G de Polavieja, Lucas PJJ Noldus, Alfonso Perez-Escudero, Pietro Perona, Andrew D Straw, Martin Wikelski, et al. Automated image-based tracking and its application in ecology. Trends in Ecology & Evolution, 29(7):417-428, 2014.

Hueihan Jhuang, Estibaliz Garrote, Xinlin Yu, Vinita Khilnani, Tomaso Poggio, Andrew D Steele, and Thomas Serre. Automated home-cage behavioural phenotyping of mice. Nature Communications, 1:68, 2010.

Mayank Kabra, Alice A Robie, Marta Rivera-Alba, Steven Branson, and Kristin Branson. JAABA: interactive machine learning for automatic annotation of animal behavior. Nature Methods, 10(1):64-67, 2013.
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

J Moore. Some thoughts on the relation between behavior analysis and behavioral neuroscience. The Psychological Record, 52(3):261, 2002.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pp. 3532-3540, 2015.

Herbert A Simon. The Sciences of the Artificial. MIT Press, 1996.

Kathleen K Siwicki and Edward A Kravitz. Fruitless, doublesex and the genetics of social behavior in Drosophila melanogaster. Current Opinion in Neurobiology, 19(2):200-206, 2009.

Alexander B Wiltschko, Matthew J Johnson, Giuliano Iurilli, Ralph E Peterson, Jesse M Katon, Stan L Pashkovski, Victoria E Abraira, Ryan P Adams, and Sandeep Robert Datta. Mapping sub-second structure in mouse behavior. Neuron, 88(6):1121-1135, 2015.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016."}]
r1xUYDYgg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Recently, machine learning, which uses big data derived from user activity on websites, images anc. videos is increasingly getting attention. Deep learning is at the center of that attention. Conven. tional machine learning techniques have required hand-crafted features specialized to a particulai domain such as image or voice. In contrast, deep learning has a hugely important benefit that can. illustrate data flow from raw data to an objective value in a single neural network and can train thor. oughly using those data. In the computer vision domain, a team of Hinton (Krizhevsky et al., 2012 achieved outstanding classification accuracy using deep learning in an object classification competi tion ILSVRC2012 (Russakovsky et al., 2015). In the subsequent years' competitions, deep-learning. based methods evolved continually and exhibited superior performance (Simonyan & Zisserman. 2014a; Szegedy et al., 2014; He et al., 2016). Convolutional neural networks (CNNs) trained for. ILSVRC object classification are helpful for improving classification accuracy for scene recognition. and video recognition by functioning as a feature extractor or being fine-tuned (Zhou et al., 2014;. Simonyan & Zisserman, 2014b). Moreover, application is beginning to emerge in other areas such. as medical imaging (Tajbakhsh et al., 2016). Software platforms for deep learning are expected to. play an important role in accelerating a wide range of research efforts and applications..\nAlthough deep learning achieved significant recognition accuracy that cannot be achieved usin onventional methods, the number of parameters that can be trained is greater, resulting in request or huge amounts of training data. This shortcoming not only increases data collection costs bu also increases computational costs of training larger parameters with larger data. Moreover, trial and-error must be undertaken to ascertain a good neural network structure; thereby higher cost oecome necessary. What resolved this computational cost difficulty and enabled deep learning t work on a practical scale problem is general purpose computing on GPU (GPGPU) technology which offers rapid matrix calculation. However, a deep learning framework must be set up o a dedicated computer. If a user wants to train a huge network, then a cluster computing syster hat uses MPI or Hadoop must be used for collaboration of multiple computers to obtain large working memory and computational speed. To set up and maintain these systems generally present"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "an expensive task. For that reason, such systems are available only to expert IT companies or laboratories.\nThis work specifically examines JavaScript, the programming language that runs on web browsers installed on ordinary personal computers and smartphones. With the recent advancement of web technology, JavaScript became the standard programming language to implement rich applications on web browsers. Word processors provided by Google and Microsoft are the popular examples Those applications are traditionally implemented as native applications. This is not only a change of programming language; it brings an advantage of install-free convenience. Moreover, the com munication features of web browsers are used not only during the loading of the application, bu are also used by the application on demand, using so-called Ajax technology. 
For example, using this technology with a Google spreadsheet, modifications made by one user are shown in real time on other users' displays. By making full use of this technology, collaboration of an application running on web browsers across the internet becomes possible. Moreover, web browsers such as Google Chrome run not only on Windows, but also on Mac OS X, Linux, Android, and iOS smartphones. They provide a compatible JavaScript executing environment. More recently, small microcontroller boards for prototyping Internet of Things (IoT) devices run Linux, and JavaScript can run on these devices. However, JavaScript is rarely used for scientific computation. This is mainly because JavaScript assumes single-threaded execution, and it has no fast matrix computation library, which is crucially important for scientific computation. To resolve this difficulty, our previous work proposed a fast matrix computation library which uses a parallel computing platform, WebCL, from JavaScript (Miura et al., 2015). In WebCL, GPGPU can be utilized from JavaScript code. Moreover, its application to deep learning has been proposed (Miura & Harada, 2015). However, existing implementations cannot fully exploit the functionality of JavaScript and WebCL. For that reason, only a small six-layer CNN for classifying the CIFAR-10 (Krizhevsky, 2009) dataset can be trained. In this work, our objective is to provide a deep learning platform that can train practical large-scale CNNs as large as VGGNet. In the Experiment section, we present preliminary results on training VGGNet by distributed computation using web browsers as the computation clients. In the following sections, we restrict our description to CNNs only, but our system is applicable to neural networks of other kinds by implementing the layers that they need.

Our contributions are the following:
We implemented the fastest matrix library and deep learning library that can run on web browsers using GPGPU. The source code is provided as open-source software1. Even where GPGPU cannot be used, a native JavaScript implementation is provided, which allows high-level multi-dimensional matrix operation.
We describe the possibility of training large scale CNNs in a distributed manner without installing software in computation nodes, except for a generic plugin.

1 Download code from https://github.com/mil-tokyo"}, {"section_index": "1", "section_name": "2 RELATED WORK", "section_text": "In this section, we first describe the studies related to distributed computing using generic computers that are not designed for scientific computing. The SETI@home project searches for extraterrestrial life (Anderson et al., 2002). In that research effort, radio wave analyses were performed distributedly on computers of volunteers. Although dedicated software had to be installed, more than 3 million computers participated in the project and contributed vast amounts of computational resources. Merelo-Guervos et al. (2008); Klein & Spector (2007) distributedly computed genetic algorithms (GA) using web browsers as computing nodes. The main component of GA was the calculation of the fitness of a population, which could be computed completely in parallel, thereby achieving extremely effective distributed computing. In our work, the main task to be distributed is deep learning, for which a large amount of weight parameters must be communicated frequently. Therefore, the communication efficiency becomes important.

Secondly, we explain distributed computing of deep learning. Dean et al. (2012) proposed a mechanism called DistBelief, which divides a neural network into multiple blocks of neurons and trains each block on a different computer. Large amounts of data are transferred at the division borders.
They require n-to-n communication, which is unsuitable for an environment in which computing nodes are not in the same LAN. deeplearning4j provides distributed computing of a deep learning framework that runs on the distributed computing framework Hadoop. However, Hadoop must be installed in all computing nodes, thereby imposing high deployment and maintenance costs. Meeds et al. (2014) developed a distributed deep learning system using web browsers. However, it is implemented in native JavaScript. For that reason, training with a large-scale dataset is nearly impossible because of the computational speed. In this work, we inherit the good properties of a JavaScript (web browser) based computing environment, with the aim of making training of practical CNNs possible."}, {"section_index": "2", "section_name": "3 MATRIX LIBRARY IMPLEMENTATION", "section_text": "In this section, we describe the fast and generic matrix library "Sushi2", which is based on our previous library "Sushi". Though Sushi achieved efficient calculation on GPGPU, it currently lacks the capability for large scale neural networks that require matrices of large dimensions. Sushi2 was developed to overcome the problems that Sushi has been facing, and achieves the following benefits:

Use of simple and efficient data structures to achieve good performance.
Allowing users to understand how to use it easily.
Support for CPU (native JavaScript) and GPGPU matrices without burdening ordinary users with learning WebCL programming.

Most general purpose matrix libraries for JavaScript represent a multi-dimensional matrix with a nested JavaScript array. In contrast, Sushi2 represents a matrix with a TypedArray, which is used for transferring numeric data between the CPU and GPGPU. A TypedArray is a one-dimensional numeric array with fixed size and bit width at construction, as in arrays of the C language. The array accommodates efficient storing and manipulation of large data. The TypedArray which stores 32-bit floating point numbers is named Float32Array, and the one that stores 8-bit unsigned integers is named Uint8Array. The numeric type of JavaScript is a 64-bit floating point number, but some WebCL environments do not support it. Therefore, the basic numeric type of the matrix is a 32-bit floating point number. However, the precision of a 32-bit floating point number is only 23 bits, so it cannot be used as an index of a large matrix (which may have more than 2^23 elements). This is a problem for functions such as argmax, so a 32-bit signed integer matrix is also implemented. Moreover, an 8-bit unsigned integer matrix for raw image data and a logical matrix for Boolean operations are implemented.

Functions for operating matrices are designed to be similar to those of MATLAB, which allows new users to use Sushi2 quickly. Operations for matrices that have more than two dimensions are implemented. It is a simple matter to operate color images and sets of color images (four-dimensional matrices). Almost all patterns of indexing operations in MATLAB are implemented. For import or export of a matrix, the efficient binary format of numpy is implemented as well as the native JavaScript nested Array.
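A minimal sketch of this representation is given below: a Float32Array plus shape metadata, indexed in fortran (column-major) order, as noted in the comments of Figure 3. The class here is purely illustrative and is not Sushi2's actual implementation.

// Minimal column-major matrix on top of Float32Array (illustrative only).
class Mat {
  constructor(rows, cols, data) {
    this.rows = rows;
    this.cols = cols;
    this.data = data || new Float32Array(rows * cols);
  }
  // fortran-order: element (i, j) is stored at i + j * rows (0-origin here)
  get(i, j) { return this.data[i + j * this.rows]; }
  set(i, j, v) { this.data[i + j * this.rows] = v; }
}

// Element-wise add touches memory linearly and allocates no nested arrays,
// which is the main performance advantage over nested-Array libraries.
function plus(a, b) {
  const out = new Mat(a.rows, a.cols);
  for (let k = 0; k < a.data.length; k++) out.data[k] = a.data[k] + b.data[k];
  return out;
}

Because the backing store is a single flat buffer, the same bytes can be handed to WebCL without conversion, which is what makes CPU/GPGPU interoperability cheap.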
Sushi and Sushi2 use WebCL technology, which is a parallel computing platform to be used from JavaScript. WebCL is a JavaScript wrapper for the parallel computing platform OpenCL, standardized by the Khronos Group, which provides a unified interface to multi-core CPUs and GPGPUs. In contrast to NVIDIA CUDA, GPUs from AMD and Intel can also be used as accelerators. Unfortunately, WebCL is not a built-in feature of web browsers, but there is an add-on for Firefox and a WebCL-integrated Chromium. Our library also works with node.js (a server-side JavaScript execution environment), in which the node-opencl library can be used to accelerate computation. Although Sushi2 performs best in a WebCL environment, most functions have an equivalent native JavaScript implementation. Sushi2 currently uses WebCL for the acceleration of numerical calculation, but it is possible to use other solutions, including WebGL or asm.js, by substituting the implementation of matrix manipulation. In WebCL, a "kernel" is the function to run on the GPGPU. A kernel, which is written in the C language, must be compiled before use. Sushi2 wraps kernels to allow users to write simple code. Details of low-level WebCL operations are available in the literature (Miura et al., 2015).

Function $M.gpuArray transfers a matrix to the GPGPU. In functions that support WebCL, operations on matrices on the GPGPU are accelerated. In JavaScript, unused memory is released by garbage collection, but this is not applied to memory allocated on the GPGPU by WebCL. It has to be released by explicitly calling the destruct method. To make programming convenient, an "autodestruct" helper function is supplied. When the closure passed to autodestruct finishes, the matrices allocated in it are released automatically. Figure 1 presents a sample implementation of a fully-connected layer of a CNN. Whether input matrices are on the GPGPU or not, they can be processed by the same code.

1 var top = $M.autodestruct(function () {// closure function
2 var product = $M.mtimes($M.t(weight), data);// weight' * data (no operator overloads in JavaScript)
3 var bias_repeated = $M.repmat(bias, 1, $M.size(data, 2));// $M.size(data, 2) is the number of samples
4 var product_with_bias = $M.plus(product, bias_repeated);// product + bias_repeated
5 return product_with_bias;
6 });// allocated matrices other than product_with_bias (e.g. $M.t(weight), product, bias_repeated) are released here

Figure 1: Example of forward calculation of a fully-connected layer using Sushi2.

Most GPGPU kernels are implemented originally for Sushi2, but the matrix multiplication kernel is ported from clBLAS's "sgemm", because it requires advanced optimization.

Table 1 presents a speed comparison between our library and existing JavaScript based matrix libraries: Sylvester and Math.js. The hardware environment is shown in Table 2 (AMD). When GPGPU is used, the time includes data transfer between the CPU and GPGPU. Task 1 represents a simple element-wise task. Task 2 represents a relatively expensive element-wise task. Tasks 3 and 4 are matrix multiplication tasks; the complexity of operations is greater than the number of elements. Our matrix representation (TypedArray) seems to be better than the native JavaScript Array used in other libraries, even without WebCL. We can see clear superiority of using GPGPU when the computational cost is high.
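A typical usage pattern combining $M.gpuArray and $M.autodestruct might look as follows. This is a sketch: the construction of the input matrices via $M.typedarray2mat follows the pattern of Figure 3, and the 'single' data class is assumed from the network definition file; only $M.mtimes and the destruct method are taken directly from the text.

// Build two small matrices on the CPU (construction pattern assumed from Figure 3).
var a = $M.typedarray2mat([2, 2], 'single', new Float32Array([1, 2, 3, 4]));
var b = $M.typedarray2mat([2, 2], 'single', new Float32Array([5, 6, 7, 8]));

// Move them to the GPGPU; subsequent supported operations run on the device.
var a_gpu = $M.gpuArray(a);
var b_gpu = $M.gpuArray(b);

// Compute inside autodestruct so intermediate GPGPU buffers are freed automatically.
var c = $M.autodestruct(function () {
  return $M.mtimes(a_gpu, b_gpu); // the returned matrix survives the closure
});

// Inputs transferred manually still need explicit release (no GC for WebCL memory).
a_gpu.destruct();
b_gpu.destruct();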
"}, {"section_index": "3", "section_name": "DEEP LEARNING LIBRARY IMPLEMENTATION", "section_text": "In this section, we describe the deep learning library "Sukiyaki2", which is based on the matrix library Sushi2.

Table 1: Speed of Matrix Calculation. Time [ms] to process each task is shown.

Figure 2: Sample of a neural network and corresponding definition file.

1 var imagedata = canvas_context.getImageData(0, 0, 28, 28);// get pixel data from canvas
2 var image = $M.typedarray2mat([4, 28, 28], 'uint8', new Uint8Array(imagedata.data));// convert to matrix with specifying channel, width, height (in fortran-order)
3 image = image.get(1, $M.colon(), $M.colon());// extract single color channel (image(1, :, :) in MATLAB)
4 image = $M.permute(image, [3, 2, 1]);// transpose to height, width, channel
5 net.forward({ 'data': image }, function () {// forward propagation
6 var pred = net.blobs_forward['pred'];// prediction layer output
7 var max_index = $M.argmax(pred).I.get();// get matrix index of highest score (1-origin)
8 var predicted_number = max_index - 1;
9 document.getElementById('result').textContent = predicted_number.toString();// display classification result
10 net.release();
11 });

(screenshot: "Digit recognition from camera" demo page with Load Network / Capture / Recognize controls, input image, resized image, and predicted digit)

Figure 3: Screenshot of digit recognition web application using a trained CNN, and the main code of recognition. Recognition is performed on the Android tablet, not on a server.

Sukiyaki2 implements modules that are necessary for deep learning: layers, a network structure manager, and optimizers. Users can use a single layer separately, as well as train a network by supplying a configuration file to the executable. Figure 2 portrays a sample of a network definition file. For the network analysis required for distributed computing in the future, we used an architecture with statically defined relations of layers. Improvements from our previous work include: enabling network graph branches (necessary for ResNet training), addition of some layers including dropout and batch normalization, and efficient binary export of network parameters. Users can implement original layers and optimizers to train new neural networks. A layer works automatically with CPU and GPGPU if it can be implemented by Sushi2's matrix operations. For cases in which a performance bottleneck exists, a dedicated GPGPU kernel can also be implemented. Using GPGPU for training is recommended, but almost all functions have a native JavaScript fallback.

Figure 3 portrays a sample application for recognizing digits captured using a camera. The network is trained using the MNIST dataset (LeCun et al., 1998b). Although image data are given as a flat byte array, the extensive functions of Sushi2 allow a short implementation of image recognition in only 10 lines. Recent web browsers for smartphones follow the JavaScript standard, and it is possible to develop such applications as in this sample.

1 [{"type": "blob_data", "name": "d_train", "inputs": ["batch"], "outputs": ["data", "label"], "params": {"data_shape": [28, 28, 1], "file_prefix": "mnist_train", "data_klass": "single"}, "phase": ["train"]},
2 {"type": "blob_data", "name": "d_test", "inputs": ["batch"], "outputs": ["data", "label"], "params": {"data_shape": [28, 28, 1], "file_prefix": "mnist_test", "data_klass": "single"}, "phase": ["test"]},
3 {"type": "convolution_2d", "name": "conv1", "inputs": ["data"], "outputs": ["conv1"], "params": {"out_size": 20, "stride": 1, "pad": 0, "in_size": 1, "ksize": 5}},
4 {"type": "pooling_2d", "name": "pool1", "inputs": ["conv1"], "outputs": ["pool1"], "params": {"stride": 2, "pad": 0, "type": "max", "ksize": 2}},
5 {"type": "relu", "name": "relu3", "inputs": ["pool1"], "outputs": ["relu1"], "params": {}},
6 {"type": "linear", "name": "fc3", "inputs": ["relu1"], "outputs": ["pred"], "params": {"out_size": 10, "in_shape": [12, 12, 20]}},
7 {"type": "softmax_cross_entropy", "name": "loss", "inputs": ["pred", "label"], "outputs": ["loss"], "params": {}},
8 {"type": "accuracy", "name": "acc", "inputs": ["pred", "label"], "outputs": ["accuracy"], "params": {}, "phase": ["test"]}]
st) \"data_klass\": \"single\"'}, \"phase\": [\"'test\"']}, abel 3 {\"type\": \"convolution_2d\", \"name\": \"conv1\", \"inputs\": [\"data'], \"outputs\": ['. conv1\"], \"params\": {\"out_size\": 20, \"'stride\": 1, \"pad': 0, '\"in_size\": 1,. ksize\": 5}}, 4 {\"type\": \"pooling_2d\", \"name\": \"pool1\", \"inputs\": [\"conv1'], \"outputs\": [\". pool1\"], \"params\": {\"stride': 2, \"pad\": 0, \"type\": \"max\", \"ksize\": 2}},. 5 {\"'type\": \"relu\", \"name\": \"relu3\", \"inputs\": [\"pool1\"], \"outputs\": [\"'relu1\"],. params\": {}}, 6 {\"type\": \"linear, \"name\": \"fc3\", \"inputs': [\"relu1\"], \"outputs\": [\"pred'], . acy params\": {\"out_size\": 10, \"in_shape\": [12, 12, 20]}}, 7 {\"type\": \"softmax cross_entropy\", \"name\": \"loss\", \"inputs\": [\"pred\", \"label],. \"outputs\": [\"loss\"], \"params\": {}}, 8 {\"type\": \"accuracy\", \"name\": \"acc\", \"inputs\": [\"pred\", \"label''], \"outputs\": ['. accuracy' params\": {}. 'phase'': [''test'1}.\nTable 3: Speed of training LeNet. Processed images per second\nTable 3: Speed of training LeNet. Processed images per second JavaScript environment ConyNetJS Ours Firefox 64 107 node.js 88 4770\nIn this section, we evaluate the CNN training performance of the proposed system. The specifica tions of hardware used for experiments are shown in Table 2.\nFirst, we compared our library and existing deep learning library ConvNetJS by Andrej Karpa thy8, which is written in JavaScript. We evaluated them by training LeNet with MNIST dataset (LeCun et al., 1998b). The network structure is based on LeCun et al. (1998a), which contains two convolutional layers and two fully-connected layers. The batch size is 64. Firefox (version 32) and node.js (version 4.3.0) are used as the JavaScript execution environment. A tiny server application is implemented and used for supplying the dataset and saving the trained model to and from the web browser.\nNext, we trained VGGNet (Simonyan & Zisserman, 2014a) and ResNet (He et al., 2016) as practical scale CNNs. VGGNet is proposed by Simonyan & Zisserman (2014a) at ILSVRC2014. 16-layer version, denoted as VGG16, includes 13 convolutional layers and 3 fully-connected layers. It is among the largest CNNs that are commonly used. ResNet is the winner of ILSVRC2015. 152-layei version, denoted as ResNet152, includes 151 convolutional layers and 1 fully-connected layer, bui the bottleneck structure reduces the number of parameters.\nWe used Caffe (Jia et al., 2014), a popular deep learning library, for comparison. The mainstream. version of Caffe employs NVIDIA CUDA as the interface to GPGPU. We designate this version as Caffe (CUDA). CUDA is not compatible with GPGPUs other than NVIDIA's. Caffe uses cuBLAS for matrix operations such as multiplication. There are forks of Caffe which use OpenCL as an cross-platform GPGPU interface. One such fork is OpenCL-Caffe by AMD9, which uses clBLAS as the matrix operation. Another one is the opencl branch of Caffe by Fabian Tschopp'. It uses ViennaCL11 for matrix operations. In Caffe (CUDA), the cuDNN accelerator library from NVIDIA can also be attached. We used same batch size in the same CNN / GPU setting for fair comparison.\n8http://cs.stanford.edu/people/karpathy/convnetjs/index.html 9https://github.com/amd/OpenCL-caffe 10https://github.com/BvLc/caffe/tree/opencl lhttp://viennacl.sourceforge.net/\nTable 2: Hardware used for the experiments. NVIDIA K80 is recognized as two independent GPGPU chips from software. 
The measured calculation speed is presented in Table 3. In Firefox, the performance gain was relatively low because the control overhead of GPGPU is dominant in the small CNN. In node.js this overhead is smaller, so using GPGPU allowed faster computation by a large margin.

The training speed is presented in Table 4. By virtue of GPGPU, VGG16 and ResNet152 can be trained, which was difficult using existing JavaScript-based libraries. In ResNet152, more than 1,000 GPGPU kernels are executed, and their execution overhead seems to be problematic in the Firefox environment. Currently, our library is not faster than Caffe, but it achieves the same order of speed. In particular, Caffe (CUDA) provides the best performance. This difference mainly comes from the speed of convolution. The implementation of convolution in Caffe is similar to ours. To perform convolution, elements of the input matrix are re-ordered (i.e. lowering). Then the output is obtained by matrix multiplication with the weight. Figure 4 presents the calculation speed of the matrix multiplications used in the computation of VGG16, performed by cuBLAS and clBLAS.

Table 4: Training speed of VGG16 and ResNet152 [images/sec]. Batch size is shown in (). AMD represents AMD FirePro S9170, NVIDIA stands for NVIDIA K80.

GPU    | Software                | VGG16     | ResNet152
AMD    | Ours (on Firefox)       | 4.0 (32)  | 1.4 (32)
AMD    | Ours (on node.js)       | 5.7 (32)  | 6.5 (32)
AMD    | Caffe (AMD)             | 7.7 (32)  | N/A
AMD    | Caffe (Tschopp)         | 5.3 (32)  | 1.6 (32)
NVIDIA | Ours (on Firefox)       | 2.7 (16)  | 0.2 (8)
NVIDIA | Ours (on node.js)       | 4.9 (16)  | 2.7 (8)
NVIDIA | Caffe (Tschopp)         | 3.2 (16)  | 1.5 (8)
NVIDIA | Caffe (CUDA) w/o cuDNN  | 11.9 (16) | 8.5 (8)
NVIDIA | Caffe (CUDA) with cuDNN | 14.4 (16) | 9.4 (8)

As the figure shows, clBLAS gives inferior speed, especially on the gradient computation of layers that are close to the input layer. In such layers, the matrix shape is far from square. For that reason, performance tuning for such input shapes, or an implementation without matrix multiplication, is needed. In the CUDA environment, Lavin (2015) showed that 96% of the theoretical GPGPU performance can be achieved in convolution by circumspect implementation."}, {"section_index": "5", "section_name": "5.2 DISTRIBUTED TRAINING", "section_text": "The method of distributed training is simple data-parallelism. The system is depicted in Fig. 5. First, the server distributes the network weight W_t and the images in a batch. The batch for iteration t (I_t) is divided into N splits, I_t1, I_t2, ..., I_tN, where N is the number of computing clients. After client K calculates the gradient of the weight, dW_tK, using its assigned batch split, it sends the gradient to the server. The server takes the average of the gradients from all clients and then updates the weight using it (W_{t+1} = W_t - (eta/N) * sum_K dW_tK; see the sketch at the end of this subsection). The optimization method is momentum SGD. The result is equivalent regardless of the number of clients.

First, we trained LeNet distributedly on Nexus 7 tablets (Android OS). The Chrome browser is used as the client. The batch size is 120 and is divided equally among the clients. Figure 6 (left) shows the speedup according to the increase in the number of clients. Naturally, the absolute speed is slow, but we can demonstrate that the computational power of mobile devices can be accumulated, and nearly linear speedup is achieved.

[Bar chart; legend: cuBLAS forward / backward / gradient and clBLAS forward / backward / gradient, across the convolution layers conv1_1 through conv5_x of VGG16.]

Figure 4: Calculation speed for each layer's computation in VGG16. Measured on NVIDIA K80 GPU. For example, the forward computation of conv1_1 is performed by matrix multiplication of (802816, 27) and (27, 64). Forward, backward, and gradient computation of cuBLAS and clBLAS are shown in different bars.
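To make the lowering step concrete, here is a minimal NumPy sketch of reducing a convolution to one matrix multiplication (im2col-style). The shapes mirror the conv1_1 operands quoted in the caption of Figure 4; the code is an illustration, not Sukiyaki2's or Caffe's actual implementation.

import numpy as np

def im2col(x, ksize, stride=1, pad=0):
    """Re-order (lower) input patches into rows, so that the convolution
    becomes a single matrix multiplication with the weight."""
    c, h, w = x.shape
    x = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode='constant')
    out_h = (h + 2 * pad - ksize) // stride + 1
    out_w = (w + 2 * pad - ksize) // stride + 1
    cols = np.empty((out_h * out_w, c * ksize * ksize), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i*stride:i*stride+ksize, j*stride:j*stride+ksize]
            cols[i * out_w + j] = patch.ravel()
    return cols

# conv1_1 of VGG16: 3 input channels, 3x3 kernels, 64 output channels.
x = np.random.randn(3, 224, 224).astype(np.float32)
weight = np.random.randn(3 * 3 * 3, 64).astype(np.float32)   # (27, 64)
cols = im2col(x, ksize=3, pad=1)   # (50176, 27); a batch of 16 gives the (802816, 27) operand
out = cols @ weight                # one large GEMM produces all output positions

Because almost all of the arithmetic ends up in the single GEMM, the convolution speed is dominated by the BLAS library, which is exactly the difference Figure 4 measures.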
Figure 5: Data-parallelism system of distributed training.

Next, we train a large-scale CNN, VGG16. Its weights and gradients have 130 million elements. They therefore require 500 MB if represented as 32-bit floating point numbers, which poses a large communication bottleneck. To suppress this issue, we implemented the 8-bit representation of each element proposed by Dettmers (2016). We used the p2.xlarge instance of Amazon Web Services as the GPGPU environment. It contains an NVIDIA K80 GPU. The batch size is 256, following (Simonyan & Zisserman, 2014a). A single forward-backward procedure cannot process 256 images at the same time due to the memory limit, so we average the gradients from multiple forward-backward procedures.

We show the speed of calculation with respect to the number of computing clients in Fig. 6 (right). Although our main focus is using web browsers as clients, the result of using node.js as clients is also shown for reference. Under the current settings, the use of four clients achieved 2.8 times faster computation than the one-client setting. The speed is much faster than the existing OpenCL-based Caffe. Due to the communication overhead, the speed saturates at 8 clients even when the 8-bit representation is employed.

Although we used K80, a high-end GPU, for this experiment, our motivation is to use ordinary personal computers for distributed computing. We can assume that the latest ordinary personal computers (not dedicated to 3D gaming) have 1/10 the performance of K80. On K80, we could train VGG16 at 29 seconds per iteration using 8 computers. With 1/10-performance GPUs, we can estimate that the maximum speed is 100 seconds per iteration using 16 computers, considering both calculation and network time. We compressed the weights to 1/4 size by the method of Dettmers; if we can compress them by a further factor of 10, the maximum speed will be 31 seconds per iteration using 64 computers. Thus, further improvements demand a reduction of communication and a better strategy of parallelism. We leave those improvements as a subject for future work.
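A minimal Python sketch of the data-parallel scheme above: the batch is split across N clients, each returns a gradient for its split, and the server averages them and applies one momentum-SGD update. The 8-bit coding shown is a simple linear quantizer standing in for the approximation of Dettmers (2016), and compute_gradient is a placeholder for a client's forward-backward pass; both are assumptions made for illustration.

import numpy as np

def quantize_8bit(g):
    """Illustrative linear 8-bit encoding of a gradient (not Dettmers' scheme)."""
    scale = np.abs(g).max() / 127.0 + 1e-12
    return (g / scale).round().astype(np.int8), scale

def dequantize_8bit(q, scale):
    return q.astype(np.float32) * scale

def server_step(weight, velocity, client_grads, lr=0.01, momentum=0.9):
    """Average the clients' gradients, then apply one momentum-SGD update."""
    mean_grad = np.mean(client_grads, axis=0)
    velocity[:] = momentum * velocity - lr * mean_grad
    weight += velocity
    return weight

# One iteration with N clients: the batch is split, each client sends back a
# quantized gradient for its split, and the server decodes, averages, updates.
rng = np.random.default_rng(0)
weight = rng.standard_normal(1000).astype(np.float32)
velocity = np.zeros_like(weight)
N = 4
compute_gradient = lambda w, split: rng.standard_normal(w.shape).astype(np.float32)
batch_splits = np.array_split(np.arange(256), N)

grads = []
for split in batch_splits:                       # in reality this runs on N clients
    q, s = quantize_8bit(compute_gradient(weight, split))
    grads.append(dequantize_8bit(q, s))          # server decodes what it receives
server_step(weight, velocity, grads)

Because only the averaged update touches the weights, the result is the same whatever N is, which is the equivalence property noted above.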
We implemented a JavaScript-based matrix library and deep learning library, to perform deep learning and to develop applications that use a trained model without a dedicated computer system. Using GPGPU via WebCL, our library provides much better performance than existing JavaScript-based libraries. It became possible to train VGG16 and ResNet152. However, the performance does not reach Caffe running in the NVIDIA CUDA environment. A salient difficulty is that the matrix multiplication necessary for convolution is slower. Additionally, we used WebCL as the GPGPU interface, but currently it is not included in web browsers. Further improvements in web technology must be undertaken to make full computing power available to scripts in web pages. In experiments on distributed training of VGG16 using web browsers as computing clients, a 2.8x speed improvement was gained from four clients. The speed is much faster than the existing OpenCL-based Caffe using a single computer. The parallelization method used in the experiments is naive, and further exploration of this area will be undertaken as a subject of future work.

[Two line charts; vertical axes: processed images per second; horizontal axes: number of clients; right-panel legend: 8-bit / 32-bit representation on Firefox and node.js.]

Figure 6: Computation speed with respect to the number of distributed clients. Left: speed of training LeNet on Nexus 7 Android tablets (Chrome browser). Right: speed of training VGG16 on clients with NVIDIA K80 (Firefox browser / node.js). Measurement includes the time of communication and optimization on the server.

This work was supported by CREST, JST."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "David P. Anderson, Jeff Cobb, Eric Korpela, Matt Lebofsky, and Dan Werthimer. SETI@home: an experiment in public-resource computing. Communications of the ACM, 45:56-61, 2002.

Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012.

Tim Dettmers. 8-Bit Approximations for Parallelism in Deep Learning. In ICLR, 2016.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.

Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009. Master's Thesis, Department of Computer Science, University of Toronto.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86, 1998a.

Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. The MNIST database of handwritten digits, 1998b. http://yann.lecun.com/exdb/mnist/.

Edward Meeds, Remco Hendriks, Said al Faraby, Magiel Bruntink, and Max Welling. MLitB: Machine Learning in the Browser. arXiv:1412.2432, 2014.

Ken Miura and Tatsuya Harada. Implementation of a practical distributed calculation system with browsers and JavaScript, and application to distributed deep learning. arXiv:1503.05743, 2015.

Ken Miura, Tetsuaki Mano, Atsushi Kanehira, Yuichiro Tsuchiya, and Tatsuya Harada. MILJS: Brand new JavaScript libraries for matrix calculation and machine learning. arXiv:1502.06064, 2015.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, pp. 1-42, April 2015.

Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, pp. 568-576, 2014b.

Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning Deep Features for Scene Recognition using Places Database. In NIPS, pp. 487-495, 2014."}]
rkGabzZgl | [{"section_index": "0", "section_name": "DROPOUT WITH EXPECTATION-LINEAR REGULARIZATION", "section_text": "Xuezhe Ma. Yingkai Gao\nLanguage Technologies Institute Carnegie Mellon University"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks (DNNs, e.g., LeCun et al., 2015; Schmidhuber, 2015), if trained properly have been demonstrated to significantly improve the benchmark performances in a wide range of application domains. As neural networks go deeper and deeper, naturally, its model complexity alsc increases quickly, hence the pressing need to reduce overfitting in training DNNs. A number of techniques have emerged over the years to address this challenge, among which dropout (Hinton et al., 2012; Srivastava, 2013) has stood out for its simplicity and effectiveness. In a nutshell, dropout randomly \"drops\"' neural units during training as a means to prevent feature co-adaptation-a sign of overfitting (Hinton et al., 2012). Simple as it appears to be, dropout has led to several record-breaking. performances (Hinton et al., 2012; Ma & Hovy, 2016), and thus spawned a lot of recent interests in analyzing and justifying dropout from the theoretical perspective, and also in further improving dropout from the algorithmic and practical perspective..\nIn their pioneering work, Hinton et al. (2012) and Srivastava et al. (2014) interpreted dropout as an extreme form of model combination (aka. model ensemble) with extensive parameter/weigh sharing, and they proposed to learn the combination through minimizing an appropriate expected loss Interestingly, they also pointed out that for a single logistic neural unit, the output of dropout is in fact the geometric mean of the outputs of the model ensemble with shared parameters. Subsequently many theoretical justifications of dropout have been explored, and we can only mention a few here due to space limits. Building on the weight sharing perspective, Baldi & Sadowski (2013; 2014) analyzec the ensemble averaging property of dropout in deep non-linear logistic networks, and supported the view that dropout is equivalent to applying stochastic gradient descent on some regularizec\nZhiting Hu, Yaoliang Yu\nzhitinghu, yaoliang}@cs.cmu.edu"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Dropout, a simple and effective way to train deep neural networks, has led to a number of impressive empirical successes and spawned many recent theoretical in vestigations. However, the gap between dropout's training and inference phases, in. troduced due to tractability considerations, has largely remained under-appreciated In this work, we first formulate dropout as a tractable approximation of some latent. variable model, leading to a clean view of parameter sharing and enabling further theoretical analysis. Then, we introduce (approximate) expectation-linear dropout neural networks, whose inference gap we are able to formally characterize. Algo rithmically, we show that our proposed measure of the inference gap can be used. to regularize the standard dropout training objective, resulting in an explicit control of the gap. Our method is as simple and efficient as standard dropout. We further prove the upper bounds on the loss in accuracy due to expectation-linearization.. describe classes of input distributions that expectation-linearize easily. 
Experiments on three image classification benchmark datasets demonstrate that reducing the inference gap can indeed improve the performance consistently..\noss function. Wager et al. (2013) treated dropout as an adaptive regularizer for generalized linea models (GLMs). Helmbold & Long (2016) discussed the differences between dropout and traditiona veight decay regularization. In terms of statistical learning theory, Gao & Zhou (2014) studied th Rademacher complexity of different types of dropout, showing that dropout is able to reduce th Rademacher complexity polynomially for shallow neural networks (with one or no hidden layers) an exponentially for deep neural networks. This latter work (Gao & Zhou, 2014) formally demonstrate that dropout, due to its regularizing effect, contributes to reducing the inherent model complexity, i particular the variance component in the generalization error.\nSeen as a model combination technique, it is intuitive that dropout contributes to reducing the variance of the model performance. Surprisingly, dropout has also been shown to play some role in reducing the model bias. For instance, Jain et al. (2015) studied the ability of dropout training to escape local minima, hence leading to reduced model bias. Other studies (Chen et al., 2014; Helmbold & Long.. 2014; Wager et al., 2014) focus on the effect of the dropout noise on models with shallow architectures We noted in passing that there are also some work (Kingma et al., 2015; Gal & Ghahramani, 2015;. 2016) trying to understand dropout from the Bayesian perspective.\nIn this work, we first formulate dropout as a tractable approximation of a latent variable model and give a clean view of weight sharing ($3). Then, we focus on an inference gap in dropout that has somehow gotten under-appreciated: In the inference phase, for computational tractability considerations, the model ensemble generated by dropout is approximated by a single model with scaled weights, resulting in a gap between training and inference, and rendering the many previous theoretical findings inapplicable. In general, this inference gap can be very large and no attempt (to our best knowledge) has been made to control it. We make three contributions in bridging this gap Theoretically, we introduce expectation-linear dropout neural networks, through which we are able to explicitly quantify the inference gap (s4). In particular, our theoretical results explain why the max-norm constraint on the network weights, a standard practice in training DNNs, can lead to a small inference gap hence potentially improve performance. Algorithmically, we propose to add a sampled version of the inference gap to regularize the standard dropout training objective (expectation linearization), hence allowing explicit control of the inference gap, and analyze the interaction between expectation-linearization and the model accuracy ($5). Experimentally, through three benchmark datasets we show that our regularized dropout is not only as simple and efficient as standard dropout but also consistently leads to improved performance ($6).\nIn this section we set up the notations, review the dropout neural network model, and discuss the inference gap in standard dropout training that we will attempt to study in the rest of the paper.."}, {"section_index": "3", "section_name": "2.1 DNNS AND NOTATIONS", "section_text": "Throughout we use uppercase letters for random variables (and occasionally for matrices as well). 
and lowercase letters for realizations of the corresponding random variables. Let X ∈ X be the input of the neural network, Y ∈ Y be the desired output, and D = {(x1, y1), ..., (xN, yN)} be our training sample, where xi, i = 1, ..., N, (resp. yi) are usually i.i.d. samples of X (resp. Y).

Let M denote a deep neural network with L hidden layers, indexed by l ∈ {1, ..., L}. Let h^(l) denote the output vector from layer l. As usual, h^(0) = x is the input, and h^(L) is the output of the neural network. Denote Θ = {θ_l : l = 1, ..., L} as the set of parameters in the network M, where θ_l assembles the parameters in layer l. With dropout, we need to introduce a set of dropout random variables S = {Γ^(l) : l = 1, ..., L}, where Γ^(l) is the dropout random variable for layer l. Then the deep neural network M can be described as:

h^(l) = f_l(h^(l−1) ⊙ Γ^(l); θ_l),  l = 1, ..., L    (1)

where ⊙ is the element-wise product and f_l is the transformation function of layer l. For example, if layer l is a fully connected layer with weight matrix W, bias vector b, and sigmoid activation function σ(·), then f_l(x ⊙ s; θ_l) = σ(W(x ⊙ s) + b) is the output of layer l with input x and dropout value s, under parameter θ_l = {W, b}.

In the simplest form of dropout, which is also called standard dropout, Γ^(l) is a vector of independent Bernoulli random variables, each of which has probability p_l of being 1 and 1 − p_l of being 0. This corresponds to dropping each of the weights independently with probability p_l."}, {"section_index": "4", "section_name": "2.2 DROPOUT TRAINING", "section_text": "The standard dropout neural networks can be trained using stochastic gradient descent (SGD), with a sub-network sampled by dropping neural units for each training instance in a mini-batch. Forward and backward passes for that training instance are done only on the sampled sub-network. Intuitively, dropout aims at, simultaneously and jointly, training an ensemble of exponentially many neural networks (one for each configuration of dropped units) while sharing the same weights/parameters.

The goal of the stochastic training procedure of dropout can be understood as minimizing an expected loss function, after marginalizing out the dropout variables (Srivastava, 2013; Wang & Manning, 2013). In the context of maximum likelihood estimation, dropout training can be formulated as:

θ* = argmin_θ E_{S_D}[−l(D, S_D; θ)] = argmin_θ E_{S_D}[−Σ_{i=1}^N log p(y_i | x_i, S_i; θ)]    (2)

where recall that D is the training sample, S_D = {S_1, ..., S_N} is the dropout variable (one for each training instance), and l(D, S_D; θ) is the (conditional) log-likelihood function defined by the conditional distribution p(y|x, s; θ) of output y given input x, under parameter θ and dropout variable s. Throughout we use the notation E_Z to denote the conditional expectation where all random variables except Z are conditioned on.

Dropout has also been shown to work well with regularization, such as L2 weight decay (Tikhonov, 1943), Lasso (Tibshirani, 1996), KL-sparsity (Bradley & Bagnell, 2008; Hinton, 2010), and max-norm regularization (Srebro et al., 2004), among which the max-norm regularization, which constrains the norm of the incoming weight matrix to be bounded by some constant, was found to be especially useful for dropout (Srivastava, 2013; Srivastava et al., 2014)."}, {"section_index": "5", "section_name": "2.3 DROPOUT INFERENCE AND GAP", "section_text": "As mentioned before, dropout is effectively training an ensemble of neural networks with weight sharing. Consequently, at test time, the outputs of each network in the ensemble should be averaged to deliver the final prediction. This averaging over exponentially many sub-networks is, however, intractable, and standard dropout typically implements an approximation by introducing a deterministic scaling factor for each layer to replace the random dropout variable:

E_S[H^(L)(x, S; θ)] ≈ h^(L)(x, E[S]; θ)    (3)

where the right-hand side is the output of a single deterministic neural network whose weights are scaled to match the expected number of active hidden units on the left-hand side. Importantly, the right-hand side can be easily computed since it only involves a single deterministic network.

Bulo et al. (2016) combined dropout with knowledge distillation methods (Hinton et al., 2015) to better approximate the averaging process of the left-hand side.
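To make the gap in (3) concrete, the following Python sketch compares its two sides for a single logistic layer: the left-hand side is estimated by averaging over sampled dropout masks, the right-hand side is the usual deterministic pass on the scaled input. All sizes and parameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

V, H, p = 20, 10, 0.5                  # input size, hidden size, keep probability
W = rng.standard_normal((H, V))
b = rng.standard_normal(H)
x = rng.standard_normal(V)

# Left-hand side of (3): ensemble average over sampled dropout masks.
masks = rng.random((100000, V)) < p    # Gamma ~ Bernoulli(p), i.i.d. per unit
lhs = sigmoid((x * masks) @ W.T + b).mean(axis=0)

# Right-hand side of (3): one deterministic pass with the scaled mask E[Gamma] = p.
rhs = sigmoid(W @ (x * p) + b)

print(np.abs(lhs - rhs).max())         # the per-unit inference gap

Because the sigmoid is nonlinear, the two sides generally disagree; how large that disagreement can be is exactly the question studied next.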
However, the quality of the approximation in (3) is largely unknown, and to our best knowledge, no attempt has been made to explicitly control this inference gap. The main goal of this work is to explicitly quantify, algorithmically control, and experimentally demonstrate the inference gap in (3), in the hope of eventually improving the generalization performance of DNNs. To this end, in the next section we first present a latent variable model interpretation of dropout, which will greatly facilitate our later theoretical analysis.

With the end goal of studying the inference gap in (3) in mind, in this section we first formulate dropout neural networks as a latent variable model (LVM) in § 3.1. Then, we point out the relation between the training procedure of LVM and that of standard dropout in § 3.2. The advantage of formulating dropout as an LVM is that we need only deal with a single model (with latent structure) instead of an ensemble of exponentially many different models (with weight sharing). This much simplified view of dropout enables us to understand and analyze the model parameter θ in a much more straightforward and intuitive way."}, {"section_index": "6", "section_name": "3.1 AN LVM FORMULATION OF DROPOUT", "section_text": "A latent variable model consists of two types of variables: the observed variables that represent the empirical (observed) data and the latent variables that characterize the hidden (unobserved) structure. To formulate dropout as a latent variable model, the input x and output y are regarded as observed variables, while the dropout variable s, representing the sub-network structure, is hidden. Then, upon fixing the input space X, the output space Y, and the latent space S for dropout variables, the conditional probability of y given x under parameter θ can be written as:

p(y | x; θ) = ∫_S p(y | x, s; θ) p(s) dμ(s)    (4)

where p(y|x, s; θ) is the conditional distribution modeled by the neural network with configuration s (same as in Eq. (2)), p(s) is the distribution of the dropout variable S (e.g. Bernoulli), here assumed to be independent of the input x, and μ(s) is the base measure on the space S."}, {"section_index": "7", "section_name": "3.2 LVM DROPOUT TRAINING VS. STANDARD DROPOUT TRAINING", "section_text": "Building on the above latent variable model formulation (4) of dropout, we are now ready to point out a simple relation between the training procedure of LVM and that of standard dropout. Given an i.i.d. training sample D, the maximum likelihood estimate for the LVM formulation of dropout in (4) is equivalent to minimizing the following negative log-likelihood function:

θ̂ = argmin_θ −l(D; θ) = argmin_θ −Σ_{i=1}^N log p(y_i | x_i; θ)    (5)

Theorem 1. The marginal negative log-likelihood in (5) is upper-bounded by the expected dropout training objective in (2):

−l(D; θ) ≤ E_{S_D}[−l(D, S_D; θ)]    (6)

Theorem 1, in a rigorous sense, justifies dropout training as a convenient and tractable approximation of the LVM formulation in (4).
Indeed, since directly minimizing the marginalized negative log-likelihood in (5) may not be easy, a standard practice is to replace the marginalized (conditional) likelihood p(y|x; θ) in (4) with its empirical Monte Carlo average through drawing samples from the dropout variable S. The dropout training objective in (2) corresponds exactly to this Monte Carlo approximation when a single sample S_i is drawn for each training instance (x_i, y_i). Importantly, we note that the above LVM formulation involves only a single network parameter θ, which largely simplifies the picture and facilitates our subsequent analysis."}, {"section_index": "8", "section_name": "EXPECTATION-LINEAR DROPOUT NEURAL NETWORKS", "section_text": "Building on the latent variable model formulation in § 3, we introduce in this section the notion of expectation-linearity, which essentially measures the inference gap in (3). We then characterize a general class of neural networks that exhibit expectation-linearity, either exactly or approximately, over a distribution p(x) on the input space.

We start with defining expectation-linearity in the simplest single-layer neural network; then we extend the notion to general deep networks in a natural way.

Definition 1 (Expectation-linearity). A network layer h = f(x ⊙ γ; θ) is expectation-linear with respect to a set X' ⊆ X if, for all x ∈ X':

‖E_Γ[f(x ⊙ Γ; θ)] − f(x ⊙ E[Γ]; θ)‖ = 0    (7)

In this case we say that θ is expectation-linearizing with respect to X'.

Obviously, the condition in (7) will guarantee no gap in the dropout inference approximation (3); an admittedly strong condition that we will relax below. Clearly, if f is an affine function, then we can choose X' = X and expectation-linearity is trivial. Note that expectation-linearity depends on the network parameter θ and the dropout distribution Γ.

Expectation-linearity, as defined in (7), is overly strong: under standard regularity conditions, essentially the transformation function f has to be affine over the set X', ruling out for instance the popular sigmoid or tanh activation functions. Moreover, in practice, downstream uses of DNNs are usually robust to small errors resulting from approximate expectation-linearity (hence the empirical success of dropout), so it makes sense to define an inexact extension. We note also that the definition in (7) is uniform over the set X', while in a statistical setting it is perhaps more meaningful to have expectation-linearity "on average," since inputs from lower density regions are not going to play a significant role anyway. Taking into account the aforementioned motivations, we arrive at the following inexact extension:

Definition 2 (Approximate expectation-linearity). A network layer h = f(x ⊙ γ; θ) is δ-approximately expectation-linear with respect to a distribution p(x) if:

E_X[ ‖E_Γ[f(X ⊙ Γ; θ) | X] − f(X ⊙ E[Γ]; θ)‖ ] ≤ δ    (8)

To appreciate the power of cutting some slack from exact expectation-linearity, we remark that even non-affine activation functions often have approximately linear regions. For example, the logistic function, a commonly used non-linear activation function in DNNs, is approximately linear around the origin. Naturally, we can ask whether it is sufficient for a target distribution p(x) to be well-approximated by an approximately expectation-linearizable one. We begin by providing an appropriate measurement of the quality of this approximation.

Definition 3 (Closeness, (Andreas et al., 2015)).
A distribution p(x) is C-close to a set X' ⊆ X if:

E[ inf_{x* ∈ X'} sup_{γ ∈ S} ‖X ⊙ γ − x* ⊙ γ‖ ] ≤ C    (9)

where recall that S is the (bounded) space that the dropout variable lives in.

Intuitively, p(x) is C-close to a set X' if a random sample from p is no more than a distance C from X' in expectation and under the worst "dropout perturbation". For example, a standard normal distribution is close to an interval centering at the origin ([−a, a]) with some constant C. Our definition of closeness is similar to that in Andreas et al. (2015), who used this notion to analyze self-normalized log-linear models.

We are now ready to state our first major result, which quantifies the approximate expectation-linearity of a single-layered network (proof in Appendix B.1):

Theorem 2. Given a network layer h = f(x ⊙ γ; θ), where θ is expectation-linearizing w.r.t. X' ⊆ X. Suppose p(x) is C-close to X' and for all x ∈ X, ‖∇_x f(x)‖_op ≤ B, where ‖·‖_op is the usual operator norm. Then, p(x) is 2BC-approximately expectation-linearizable.

Roughly, Theorem 2 states that input distributions p(x) that place most of their mass on regions close to expectation-linearizable sets are approximately expectation-linearizable on a similar scale. The bounded operator norm assumption on the derivative of f is satisfied in most commonly used layers. For example, for a fully connected layer with weight matrix W, bias vector b, and activation function σ, ‖∇f(·)‖_op = |σ'(·)| · ‖W‖_op is bounded by ‖W‖_op times the supremum of |σ'(·)| (1/4 when σ is sigmoid and 1 when σ is tanh).

Next, we extend the notion of approximate expectation-linearity to deep dropout neural networks: an L-layer network is δ-approximately expectation-linear with respect to p(x) if:

E_X[ ‖E_S[H^(L)(X, S; θ) | X] − h^(L)(X, E[S]; θ)‖ ] ≤ δ    (10)

where h^(L)(X, E[S]; θ) is the output of the deterministic neural network used in standard dropout inference.

Lastly, we relate the level of approximate expectation-linearity of a deep neural network to the level of approximate expectation-linearity of each of its layers. Theorem 3 (proof in Appendix B.2) shows that if every layer of the network is δ-approximately expectation-linear, with ‖∇f_l‖_op ≤ B, E[Γ^(l)] = γ, and expected layer variance bounded by σ, then the network is Δ-approximately expectation-linear with:

Δ = (Bγ)^{L−1} δ + (δ + Bγσ) · (1 − (Bγ)^{L−1}) / (1 − Bγ)    (11)

According to the theorem, the operator norm of the derivative of each layer's transformation function is an important factor in the level of approximate expectation-linearity: the smaller the operator norm is, the better the approximation. Interestingly, the operator norm of a layer often depends on the norm of the layer's weights (e.g. for fully connected layers). Therefore, adding max-norm constraints to regularize dropout neural networks can lead to better approximate expectation-linearity, hence a smaller inference gap and often improved model performance.

From Theorem 3 (proof in Appendix B.2) we observe that the level of approximate expectation-linearity of the network mainly depends on four factors: the level of approximate expectation-linearity of each layer (δ), the expected variance of each layer (σ), the operator norm of the derivative of each layer's transformation function (B), and the mean of each layer's dropout variable (γ). In practice, γ is often a constant less than or equal to 1. For example, if Γ ~ Bernoulli(p), then γ = p.

It should also be noted that when Bγ < 1, the approximation error Δ tends to a constant as the network becomes deeper. When Bγ = 1, Δ grows linearly with L, and when Bγ > 1, the growth of Δ becomes exponential. Thus, it is essential to keep Bγ < 1 to achieve a good approximation, particularly for deep neural networks."}, {"section_index": "9", "section_name": "EXPECTATION-LINEAR REGULARIZED DROPOUT", "section_text": "In the previous section we have managed to bound the approximate expectation-linearity, hence the inference gap in (3), of dropout neural networks. In this section, we first prove a uniform deviation bound of the sampled approximate expectation-linearity measure from its mean, which motivates adding the sampled (hence computable) expectation-linearity measure as a regularization scheme to standard dropout, with the goal of explicitly controlling the inference gap of the learned parameter and hence potentially improving the performance. Then we give upper bounds on the loss in accuracy due to expectation-linearization, and describe classes of distributions that expectation-linearize easily.
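As a sanity check on these definitions, the deep inference gap on the left-hand side of (10) can be estimated by plain Monte Carlo. The sketch below does this for a small invented network (tanh layers, so the operator-norm bound B is controlled by the weight norms, and a shared Bernoulli keep probability p, so γ = p); it is illustrative only, and foreshadows the empirical measure (12) introduced next.

import numpy as np

rng = np.random.default_rng(0)

def forward(x, weights, mask_or_p):
    """Run the L-layer dropout network (1); mask_or_p is either a list of
    sampled masks (one per layer) or the constant E[Gamma] = p used at
    inference time."""
    h = x
    for l, W in enumerate(weights):
        m = mask_or_p[l] if isinstance(mask_or_p, list) else mask_or_p
        h = np.tanh(W @ (h * m))
    return h

L, d, p, n_mc = 3, 32, 0.8, 2000
weights = [0.1 * rng.standard_normal((d, d)) for _ in range(L)]

gaps = []
for _ in range(50):                        # average over inputs x ~ p(x)
    x = rng.standard_normal(d)
    mc = np.mean([forward(x, weights, [rng.random(d) < p for _ in range(L)])
                  for _ in range(n_mc)], axis=0)   # estimate of E_S[H^(L)(x, S)]
    det = forward(x, weights, p)                   # h^(L)(x, E[S])
    gaps.append(np.linalg.norm(mc - det))
print(np.mean(gaps))                        # Monte Carlo estimate of the deep gap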
We now show that an expectation-linear network can be found by expectation-linearizing the network on the training sample. To this end, we prove a uniform deviation bound between the empirical expectation-linearization measure computed from i.i.d. samples (Eq. (12)) and its mean (Eq. (13)).

Theorem 4. Let H = {h^(L)(x, s; θ) : θ ∈ Θ} denote a space of L-layer dropout neural networks indexed by θ, where h^(L) : X × S → R^d and Θ is the space that θ lives in. Suppose that the neural networks in H satisfy the constraints: 1) ∀x ∈ X, ‖x‖₂ ≤ α; 2) ∀l ∈ {1, ..., L}, E(Γ^(l)) = γ and ‖∇f_l‖_op ≤ B; 3) ‖h^(L)‖ ≤ β. Denote the empirical expectation-linearization measure and its mean as:

Δ̂(θ) = (1/n) Σ_{i=1}^n ‖E_{S_i}[H^(L)(X_i, S_i; θ)] − h^(L)(X_i, E[S_i]; θ)‖    (12)

Δ(θ) = E_X[ ‖E_S[H^(L)(X, S; θ)] − h^(L)(X, E[S]; θ)‖ ]    (13)

Then, with probability at least 1 − ν,

sup_{θ∈Θ} |Δ̂(θ) − Δ(θ)| ≤ 2αB^L(γ^{L/2} + 1)/√n + β √(log(1/ν)/(2n))    (14)

From Theorem 4 (proof in Appendix C.1) we observe that the deviation bound decreases exponentially with the number of layers L when the operator norm of the derivative of each layer's transformation function (B) is less than 1 (and the contrary if B > 1). Importantly, the square-root dependence on the number of samples (n) is standard and cannot be improved without significantly stronger assumptions.

It should be noted that Theorem 4 per se does not imply anything about the relation between expectation-linearization and the model accuracy (i.e. how well the expectation-linearized neural network actually models the data). Formally studying this relation is provided in § 5.3. In addition, we provide some experimental evidence in § 6 on how improved approximate expectation-linearity (equivalently, a smaller inference gap) does lead to better empirical performance.

The uniform deviation bound in Theorem 4 motivates the possibility of obtaining an approximately expectation-linear dropout neural network through adding the empirical measure (12) as a regularization scheme to the standard dropout training objective, as follows:
θ* = argmin_θ E_{S_D}[−l(D, S_D; θ)] + λ · V(D; θ)    (15)

where λ is a regularization constant and V(D; θ) is a Monte Carlo approximation of the measure (12) using a single dropout sample per instance:

V(D; θ) = (1/N) Σ_{i=1}^N ‖h^(L)(x_i, s_i; θ) − h^(L)(x_i, E[S_i]; θ)‖    (16)

where s_i is the same dropout sample as in l(D; θ) for each training instance in a mini-batch. Thus, the only additional computational cost comes from the deterministic term h^(L)(x, E[S]; θ). Overall, our regularized dropout (15), in its Monte Carlo approximate form, is as simple and efficient as standard dropout.

So far our discussion has concentrated on the problem of finding expectation-linear neural network models, without any concern for how well they actually perform at modeling the data. In this section, we characterize the trade-off between maximizing "data likelihood" and satisfying an expectation-linearization constraint.

To achieve the characterization, we measure the likelihood gap between the classical maximum likelihood estimator (MLE) and the MLE subject to an expectation-linearization constraint. Formally, given training data D = {(x1, y1), ..., (xn, yn)}, we define:

θ̂ = argmin_{θ∈Θ} −l(D; θ),    θ̂_δ = argmin_{θ∈Θ, V(D;θ)≤δ} −l(D; θ)

where −l(D; θ) is the negative log-likelihood defined in Eq. (5), and V(D; θ) is the level of approximate expectation-linearity in Eq. (16).

In the following, we focus on neural networks with a softmax output layer for classification tasks:

p(y | x, s; θ) = h_y^(L)(x, s; θ) = f_L(h^(L−1)(x, s); η)_y ∝ exp⟨η_y, h^(L−1)(x, s)⟩

We would like to control the loss of model accuracy by obtaining a bound on the likelihood gap, defined as:

Δl(θ̂, θ̂_δ) = (1/n)(l(D; θ̂) − l(D; θ̂_δ))

Theorem 5 (proof in Appendix C.2) bounds this gap for distributions that are close to deterministic:

Δl(θ̂, θ̂_δ) ≤ c₁β² (‖η̂‖₂ − δ/(4β))² e^{−c₂δ/(4β)}

From Theorem 5 we observe that, at one extreme, distributions close to deterministic can be expectation-linearized with little loss of likelihood.

What about the other extreme, distributions "as close to the uniform distribution as possible"? With suitable assumptions about the form of p(y|x, s; θ) and p(y|x; θ), we can achieve an accuracy-loss bound for distributions that are close to uniform:

Δl(θ̂, θ̂_δ) ≤ (1 − δ/(4β‖η̂‖₂)) · E[KL(p(·|X; θ̂) ‖ Unif(Y))]

Theorem 6 (proof in Appendix C.3) gives this bound, and indicates that uniform distributions are also an easy class for expectation-linearization.

The next question is whether there exist any classes of conditional distributions p(y|x) for which all distributions are provably hard to expectation-linearize. It remains an open problem and might be an interesting direction for future work.

In this section, we evaluate the empirical performance of the proposed regularized dropout in (15) on a variety of network architectures for the classification task on three benchmark datasets: MNIST, CIFAR-10 and CIFAR-100. We applied the same data preprocessing procedure as in Srivastava et al. (2014). To make a thorough comparison, and to provide experimental evidence on how expectation-linearization interacts with the predictive power of the learned model, we perform experiments with Monte Carlo (MC) dropout, which approximately computes the final prediction (the left-hand side of (3)) via Monte Carlo sampling, with and without the proposed regularizer. In the case of MC dropout, we average m = 100 predictions using randomly sampled configurations. In addition, the network architectures and hyper-parameters for each experimental setup are the same as those in Srivastava et al. (2014), unless we explicitly claim to use different ones. Following previous work, for each dataset we held out 10,000 random training images for validation to tune the hyper-parameters, including λ in Eq. (15). When the hyper-parameters are fixed, we train the final models with all the training data, including the validation data. A more detailed description of the conducted experiments is provided in Appendix D. For each experiment, we report the mean test errors with the corresponding standard deviations over 5 repetitions."}, {"section_index": "10", "section_name": "6.1 MNIST", "section_text": "The MNIST dataset (LeCun et al., 1998) consists of 70,000 handwritten digit images of size 28×28, where 60,000 images are used for training and the rest for testing. The task is to classify the images into 10 digit classes. For the purpose of comparison, we train 6 neural networks with different architectures.
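Before turning to the results, here is a minimal Python sketch of how the objective in (15)-(16) can be assembled for one mini-batch. The two-layer network, its parameters, and the data are stand-ins invented for illustration (names such as net_forward are hypothetical, and the experiments use the architectures of Table 1 trained by back-propagation); only the construction of the loss mirrors (15): the dropout negative log-likelihood plus λ times the sampled gap V.

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def net_forward(X, params, mask_or_p):
    """Hypothetical 2-layer softmax net with input dropout; stands in for
    h^(L)(x, s; theta) (sampled mask) or h^(L)(x, E[S]; theta) (mask = p)."""
    W1, W2 = params
    h = np.maximum(0.0, (X * mask_or_p) @ W1)      # relu hidden layer
    return softmax(h @ W2)

def regularized_loss(params, X, y, p, lam):
    """Mini-batch objective of (15): dropout NLL plus lambda * V(D; theta),
    with V from (16) computed on the same sampled masks as the NLL term."""
    mask = rng.random(X.shape) < p
    h_drop = net_forward(X, params, mask)          # h^(L)(x_i, s_i; theta)
    h_det = net_forward(X, params, p)              # h^(L)(x_i, E[S_i]; theta)
    nll = -np.log(h_drop[np.arange(len(y)), y] + 1e-12).mean()
    v = np.linalg.norm(h_drop - h_det, axis=1).mean()
    return nll + lam * v                           # minimize by SGD as usual

X, y = rng.standard_normal((64, 100)), rng.integers(0, 10, 64)
params = (0.1 * rng.standard_normal((100, 128)), 0.1 * rng.standard_normal((128, 10)))
print(regularized_loss(params, X, y, p=0.8, lam=1.0))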
The experimental results are shown in Table 1.

From Table 1 we can see that on MNIST data, dropout network training with expectation-linearization outperforms standard dropout on all 6 neural architectures. On CIFAR data, expectation-linearization reduces the error rate from 12.82% to 12.20% for CIFAR-10, achieving a 0.62% improvement. For CIFAR-100, the improvement in terms of error rate is 0.97%, with a reduction from 37.22% to 36.25%.

From the results we see that, with or without expectation-linearization, the MC dropout networks achieve similar results. This illustrates that, in achieving expectation-linear neural networks, the predictive power of the learned models has not degraded significantly. Moreover, it is interesting to see that with the regularization, on the MNIST dataset, standard dropout networks achieve even better accuracy than MC dropout. This may be because, with expectation-linearization, standard dropout inference achieves a better approximation of the final prediction than MC dropout with (only) 100 samples. On the CIFAR datasets, MC dropout networks achieve better accuracy than the ones with the regularization. But, obviously, MC dropout requires much more inference time than standard dropout (MC dropout with m samples requires about m times the inference time of standard dropout).

In this section, we explore the effect of varying the hyper-parameter for the expectation-linearization rate λ. We train the network architectures in Table 1 with the value of λ ranging from 0.1 to 10.0. Figure 1 shows the test errors obtained as a function of λ on the three datasets. In addition, Figure 1, middle and right panels, also measures the empirical expectation-linearization risk Δ̂ of Eq. (12) with varying λ on CIFAR-10 and CIFAR-100, where Δ̂ is computed using Monte Carlo with 100 independent samples.

Table 1: Comparison of classification error percentage on test data with and without using expectation-linearization on MNIST, CIFAR-10 and CIFAR-100, under different network architectures (with standard deviations over 5 repetitions).

Data      | Architecture                 | w.o. EL Standard | w.o. EL MC | w. EL Standard | w. EL MC
MNIST     | 3 dense,1024,logistic        | 1.23±0.03  | 1.06±0.02  | 1.07±0.02  | 1.06±0.03
MNIST     | 3 dense,1024,relu            | 1.19±0.02  | 1.04±0.02  | 1.03±0.02  | 1.05±0.03
MNIST     | 3 dense,1024,relu+max-norm   | 1.05±0.03  | 1.02±0.02  | 0.98±0.03  | 1.02±0.02
MNIST     | 3 dense,2048,relu+max-norm   | 1.07±0.02  | 1.00±0.02  | 0.94±0.02  | 0.97±0.03
MNIST     | 2 dense,4096,relu+max-norm   | 1.03±0.02  | 0.92±0.03  | 0.90±0.02  | 0.93±0.02
MNIST     | 2 dense,8192,relu+max-norm   | 0.99±0.02  | 0.96±0.02  | 0.87±0.02  | 0.92±0.03
CIFAR-10  | 3 conv+2 dense,relu+max-norm | 12.82±0.10 | 12.16±0.12 | 12.20±0.14 | 12.21±0.15
CIFAR-100 | 3 conv+2 dense,relu+max-norm | 37.22±0.22 | 36.01±0.21 | 36.25±0.12 | 36.10±0.18

Figure 1: Error rate and empirical expectation-linearization risk relative to λ. [Three panels: MNIST error rate vs λ for models 1-6; CIFAR-10 and CIFAR-100 error rate and expectation-linearization risk Δ̂ vs λ.]

The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) consist of 60,000 color images of size 32×32, drawn from 10 and 100 categories, respectively; 50,000 images are used for training and the rest for testing. The neural network architecture we used for these two datasets has 3 convolutional layers, followed by two fully-connected (dense) hidden layers (again, the same as that in Srivastava et al. (2014)).
The experimental results are recorded in Table 1, too.\nFrom Figure 1 we can see that when increases, better expectation-linearity is achieved (i.e. decreases). The model accuracy, however, has not kept growing with increasing , showing that in practice considerations on the trade-off between model expectation-linearity and accuracy are needed\nTable 2: Comparison of test data errors using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation-linearization, and recently proposed dropout distillation on CIFAR-10 and CIFAR-100 under AllConv, (with standard deviations for 5 repetitions)."}, {"section_index": "11", "section_name": "6.4 COMPARISON WITH DROPOUT DISTILLATION", "section_text": "To make a thorough empirical comparison with the recently proposed Dropout Distillation. method (Bulo et al., 2016), we also evaluate our regularization method on CIFAR-10 and CIFAR-100 datasets with the All Convolutional Network (Springenberg et al., 2014) (AllConv). To facilitate. comparison, we adopt the originally reported hyper-parameters and the same setup for training..\nTable 2 gives the results comparison the classification error percentages on test data under AllConv. using standard dropout, Monte Carlo dropout, standard dropout with our proposed expectation linearization, and recently proposed dropout distillation on CIFAR-10 and CIFAR-100 1. According. to Table 2, our proposed expectation-linear regularization method achieves comparable performance to dropout distillation."}, {"section_index": "12", "section_name": "7 CONCLUSIONS", "section_text": "In this work, we attempted to establish a theoretical basis for the understanding of dropout, motivate by controlling the gap between dropout's training and inference phases. Through formulating dropou as a latent variable model and introducing the notion of (approximate) expectation-linearity, we hav formally studied the inference gap of dropout, and introduced an empirical measure as a regularizatio. scheme to explicitly control the gap. Experiments on three benchmark datasets demonstrate tha. reducing the inference gap can indeed improve the end performance. In the future, we intenc. to formally relate the inference gap to the generalization error of the underlying network, henc. providing further justification of regularized dropout.."}, {"section_index": "13", "section_name": "ACKNOWLEDGEMENTS", "section_text": "This research was supported in part by DARPA grant FA8750-12-2-0342 funded under the DEFT program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "David M Bradley and J Andrew Bagnell. Differential sparse coding. 2008\n1We obtained similar results as that reported in Table 1 of Bulo et al. (2016) on CIFAR-10 corpus, while we cannot reproduce comparable results on CIFAR-100 (around 3% worse)\nPierre Baldi and Peter J Sadowski. Understanding dropout. In Advances in Neural Information Processing Systems, pp. 2814-2822, 2013.\nAlex Krizhevsky. Learning multiple layers of features from tiny images, 2009\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied tc document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998\nYann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436-444, 2015\nXuezhe Ma and Eduard Hovy. 
End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF In Proceedings of ACL-2016, pp. 1064-1074, Berlin, Germany, August 2016.\nJost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving fo. simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014\nNathan Srebro, Jason Rennie, and Tommi S Jaakkola. Maximum-margin matrix factorization. In Advances in neural information processing systems, pp. 1329-1336, 2004.\nNitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdino Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.\nRobert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistica Society. Series B (Methodological), pp. 267-288, 1996..\nAndrey Nikolayevich Tikhonov. On the stability of inverse problems. In Dokl. Akad. Nauk SSsR volume 39, pp. 195-198, 1943.\nYarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent\nDiederik P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameteri zation trick. In Advances in Neural Information Processing Systems, pp. 2575-2583, 2015.\nurgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117 2015.\nNitish Srivastava. Improving neural networks with dropout. PhD thesis, University of Toronto, 2013\nStefan Wager, William Fithian, Sida Wang, and Percy S Liang. Altitude training: Strong bounds for single-layer dropout. In Advances in Neural Information Processing Systems. pp. 100-108. 2014\nSida Wang and Christopher Manning. Fast dropout training. In Proceedings of the 3Oth Internationa Conference on Machine Learning. pp. 118-126. 2013..\nProof of Theorem 1\nN Esp[l(D,Sp;0)] 1 p(Si logp(yi|Xi, Si;0) d(s1)...d(sN p(Si) logp(yi|Xi, Si; 0)d(Si\nBecause log(.) is a concave function, from Jensen's Inequality\np(s) logp(y[x, s; 0)d(s) log p(s)p(y|x, s;0)d(s\nN Esp-l(D,Sp;0)]> (si)p(yi|xi, Si;0)d(si) = -l(D;0 i=1\nProof. Let y* = E[r], and\nXoy=X*Oy+X-Oy\nIn the following, we omit the parameter 0 for convenience. Moreover, we denote\nEr[f(X OT;0)] E[f(XOT;0)|X]\nFrom Taylor Series, there exit some X', X\" E ' satisfy that\nf(X oT) f(X*or)+f(x'or)(x-or f(X Oy*) f(X*Oy*)+f'(x\"oy*)(X-O\nEr[f(X oT) - f(X Oy*) Er[f(X*oT+X-or)-f(X*oy*+X-Oy*)] Er[f(X*oT)-f(X*Oy*)+f'(X'oF)(X-oT)- f'(X\"Oy*)(X-Oy*)] Er[f(X*oF)- f(X*Oy*)]+Er[f'(X'oF)(X- oT)-f'(x\"oy*)(X-O*)\nEr[f(X o T) - f(X o y*)] Er[f'(x'oT)(X-oT)- f'(x\" oy*)(X-Oy*)] Er[(f'(X'o)-f'(X\"Oy*))(X-o)]+Er[f'(X\"Oy*)(X-oT-X-Oy*)] Er[(f'(X'oT) - f'(X\" oy*))(X-or)]\nA{x:|E[f(xOT;0)]-f(xOy*;0)||2=0}\nEr[f(x* oT) - f(x*oy*)]= 0\nFinally we have\nE x |Er[(f'(x'or)-f'(x\"oy*))(X- or)ll2 2BE inf sup|X Oy- x O y2 < 2BC xEAyES\nE x |Est [H(L)(X, SL)] - h(L)(X,E[SL])|]2 A1\n- By)4 AL=(By)-$+($+ Byo 1 - B\n|Er(L+1)[fL+1(H(L) Or(L+1))] - fL+1(H(L) EH(L)\nEs[HL)(X,S;0)]=Es[H(L)(X,S;0)X\nE x Es1 HL+D o r(L+1)) fL+1(H(L) fL+1(h(D) Er(L +1(H(L) oT(L+1) f1+1(h L+1(h(L)\nFrom Eq. 2 and Jensen's inequality, we have\nEr[(f'(x'or)- f'(x\"oy*)(x-or)]ll) Er[llf'(X'oT)- f'(X\" Oy*)||op|X- OT||2 2BEr I|x- or2 2B inf sup|X O y- x Oy2 xEAyES\nEr(L+1)[fL+1(H(L) OF(L+1))] - fL+1(h(L) (L+1)[fL+1(H(L) OF(L+1)] - fL+1(h(L) O 0\nE +1(h(L) H(L) +fL+1(Es.. TH (L+1) fL+1 E x Es,H(L 1(EsH) fL+1Ch.\nfL+1(EsLHL)] o] V fL+1(EsL[H(L)] BEH(L) H(L) _Est[H(L) < |H(L)_Est[H(L)]?] 
EH(L)\nEx ESL+1 [H(L+1)] _ h(L+1)] 8 + Byo + ByL (By)$+($+ By\nBefore proving Theorem 4. we first define the notations\nIn addition, we import the definition of dropout Rademacher complexity from Gao & Zhou (2014)\n10ih(X n Rn(H,Xn,Sn) E. sup n hEH i=1 Rn(H) Exn Radn(H, Xn, Sn Sn\nEx fL+1(Es1H(L) +1 fL+1(h(L) ByE x Es[H(L)]h(L)|\nn Radn(F, Xn) = E. sup fEF n i=1\nRadn(F) = Exn Radn(F,Xn)\nwhere H : S -> R is a function space defined on input space and dropout variable space S. Rn(H, Xn, Sn) and Rn(H) are the empirical dropout Rademacher complexity and dropout\nNow. we define the following function spaces:"}, {"section_index": "15", "section_name": "Proof.", "section_text": "Rn(H,Xn) = Esn Raa sup n hEH 0;h(Xi,Si)) F sup n hEH Eo sup Esr 0;h(Xi,Si)) > hEH n E. 0;Es,[h(Xi,Si) sup r hEH =1 E. sup 1 0;Es,[HL)(X,Si; Radn(F,Xn) hEH =1\nFrom Lemma 7, we have Radn(F) < Rn(H)\nRn(H) aBL < Radn(9) < a B n\nProof. See Theorem 4 in Gao & Zhou (2014)\nNow, we can prove Theorem 4"}, {"section_index": "16", "section_name": "Proof of Theorem 4", "section_text": "log(1/) sup|- ]< 2Radn(V) + vEV n\naBL(yL/2+1 Radn(V) = Radn(F - G) < Radn(F) + Radn(9) <\n2aBL(yL/2 +1) log(1/) sup-] 0e0 n\nf(x;0) : f(x;0) = Es H( g(x;0) : g(x;0) =h(L)(x,E[S];0),0 E H h(x,s;0) : h(x,s;0) = h(L)(x,s;0), 0 n space of v(x) = f(x)-g(x) is V ={f(x) -g(x) : f E F,g E G}\nlVfL(;n) op < 2l[nl2\nProof. denote\nK A=VfL(;n)=py(ny-n) here py =p(y|x,s;0), n = E[ny] = Pyny. K y=1\nFrom lemma 9, we have ILfL(x; n) yll2. Then fi.(u: n) l? 2lnl3l?\nEst[HL]-hL| |Esz-1 [fL(H(L-1);n)]- fL(h(L-1);n)|l, Es-1|fz(H(L-1);n)-fL(h(L-1);n)| < VI 2||n|2|H(L-1) _h(L-1)j] < 4B||n||2 < S\nLemma 10 says that we can get 0 satisfying the expectation-linearization constrain by explicitly scaling down n while keeping ..\nIn order to prove Theorem 5, we make the following assumptions\nThe dimension of h(L-1) is d, i.e. h(L-1) E Rd. Since Vy E V,p(y|x; 0) > 0, we assume p(y|x; 0) 1/b, where b > [V] = k As in the body text, let p(y|x, s; 0) be nonuniform, and in particular let nTh(L-1)(x.s: X) > c|(|l2, Vy y* hL-1\nLemma 11. If p(y|x; 0) > 1/b, then Va E [0. 1], for parameter 0 = {X, an}. we hav\n1 p(y|x;0) > 1\nFor convenience, we denote X = {01,..., 0L-1}. Then 0 = {, n}, and MLE = {, } Lemma 9. l|V fL(;n)T lop 2|nl|2 (7) Proof. denote A =VfL(;n)T =[pu(ny where py =p(y|x,s;0),n =E[ny] = PyNy y=1 For each v such that ||v||2 = 1, Av|l? l|py (ny-n) l2|ly|l?= l|py (y-n) lI2 yEV yE) yEV py||ny -nll? 2py l|n|l2+ Py'l|ny'|I2 yEV yEV y' E) 4 py||ny|I2 < 4|n|I2 yEV So we have ||A||op 2||n||2 Lemma 10. If parameter 0 = {, n} satisfies that |/nll2 46, then V(D; 0) 8, where V(D; 0) is defined in Eq. (16). Proof. Let St = {r(1),...,r(L)}, and let H(l) and h(l) be short for H(l)(X, St;0) and h(l)(X. E(SD): 0), respectively.\n|Au|2 l|py (nyn) ll2l|yll?= l|py (ny-n) lI2 yE yEV yE) py|lny-n|l? 2py l|n|l2+ Py'lIny'll2 yEV yE) y'EV 4 L py|nyll2< 4||nll2 yEV\nProof. Let St = {r(1),...,T(L)}, and let H(l) and h(l) be short for H()(X, St;0) and h(l)(X, E(S); 0), respectively.\nh(L-1)(x,s;) if i = y 0 otherwise\nProof. We define\nSince Y = {1, ..., k}, for fixed x E X, s E S, log f(a) is a concave function w.r.t a. Since b > k, we have\nlog f(a) > (1 a) log f(0) + a log f(1) > - log 6\n1 p(y|x;0) = Es p(y[x, S; 0\np(y|x,s,0) < e-cal|nll2\nLemma 13. For a fixed x and s, the absolute value of the entry of the vector under the paramete\np(y|x, s;0)(uy Ey[uy])]i < B(k - 1)e-ca|n|2\nProof. Suppose y is the majority class of p(y[x, s: 0). 
Then\nUy-Ey[uy=\n(1 p(y|x,s;0)h(L-1) if y = y* -p(y|x,s;0)h(L-1) otherwise\np(y|x, s;0)(uy - Ey[uy])|i |(uy - Ey[uy])|i (k-1)e-ca|nll2\nNow, we suppose y is not the majority class of p(y|x, s; 0). Then.\n|p(y|x, s; 0)(uy - Ey[uy])|i p(y|x,s; 0) < e-ca||nll2\nOyerall. the lemma follows\nLemma 14. We denote the matrix\nA A Es pyx y - Ey[uy])(uy- Ey[uy] p(y|x;0 p(y|x,s;0 p(y|x,s;0 H p(yx;0 p(yx;0\nThen the absolute value of the entr of A under the arameter 0 = {\\, an}.\nxnh(L-1) f(a) =(y|x,s;0) = anT,h(L-1)(x,s; h(L-1)(x,S; y'EV y'EJ\np(y|x,s,0) = e-ca||nll2 y'EV\nAij] 2b(k-1)32e-ca|nll2\nProof. From Lemma 11, we have p(y|x; 0) 1/b. Additionally, the absolute value of the entry o uu Ey[uy] is bounded by 3. We have for each i.\np(y|x, s;0 p(y|x,s;0) E c Ey[uy < Es 1 p(y[x;0 p(y|x;0\nBijl 2(k-1)32e-ca|nllz\nProof. We only need to prove that for fixed x and s, for each i, j.\nEy [uyu] - Ey[uy]Ey[uy]T[z, 2(k-1)2e-ca|n|2\nSuppose y is the majority class. Then from Lemma 12\nIf y is not the majority class. Then\np(y|x, s;0) - p(y|x, s;0)2 p(y|x, s;0) < e-ca||nll2\nk p(y|x,s;0) - p(y|x,s;0)2 2(k-1)e-ca|nll2 y=1\nThe lemma follows\nn 1 n i=1\n2dk(k - 1)(b + 1)32e-ca|nll\nNow, we can prove Theorem 5 by constructing a scaled version of 0 that satisfies the expectation linearization constraint.\nAij] 2b(k-1)32e-ca||nll2\np(y|x, s;0) B = Es (Eyuyuy]- Ey[uy]Ey[uy]T p(y|x;0)\n|Ey [uyuY]-Ey[uy]Ey[uy]|, =|Covy[(uy)i,(uy)j]| < 2p(y|x,s;0)-p(y|x,s;0)2 y I\n(y|x,s;0) - p(y|x, s;0)2 <1-p(y|x, s;0) < (k-1)e-ca|nll2"}, {"section_index": "17", "section_name": "Proof of Theorem 5", "section_text": "0,0s)<(0,0)=(l(D;0-l(D;0))=g(,n-gX,a\ng(,an)= g(,n) + Vg(,n)(an-n) +(an-n)V?g(,n(an- n\nSince 0 is the MLE, the first-order term T g(, n)(an - n) = 0. The Hessian in the second-order term is just Eq.(8). Thus, from Lemma 16 we have.\ng(X,n) - (1 - a)2l|nll22dk(k - 1)(b + 1)2e-ca|nll g(,n) - 2dk(k-1)(b+ 1)2 lnll2 43 (l|n|l2 - 43 C1 B2 e-c2s/43\nwith setting c1 = 2dk(k - 1)(b + 1) and c2 = c. Then the theorem follows\nIn the following, we denote 0 = {X, an}\nProof. This lemma can be regarded as a corollary of Lemma 11\np(y[x,s,0) = ,h(L-1) 1)(x,s;X) L(Wy') y'EV y'EV\ndgs(a) dp(y'x,s;0) log Wy'! -Vary [logwy[X - x, S = s] da da y'EV\nm(s,y) = logp(y(x, s;0) - Ey log p(Y|x, s;0)\n1 p(y|x,s;0) >\ngs(a) 0\nlogp(Y|xi, s;6 log p(yi|xi, s; 0\nLemma 19. If y satisfies Lemma 17 and qs(Q) > 0, then\nVary [m(s, Y)] m(s, y)\nProof. First we have\nm(s,y) = logp(y[x, s;0) - log1/k - K L p(|x, s;0)|Unif(V) < 0\nThen the lemma follows\nVary[m(s, Y)] m(s, yi)\nlogp(ya|xi:{X,an}) x) logp(yi|xi;{\\,0})+a logp(yi|xi;{,n})\nProof. We define\nf(a) = logp(yi|xi;{, an} Qlogp(y|xi:{\\, log p(ualx::3 A.0)\nV2fa) =-Es|Y=yi [Vary [m(S, Y)]] + Vars|Y=y; [m(S, yi)]\nEs|Y=y; [Vary [m(S, Y)]] Es|Y=yi [] [m(S,yi)2] Vars|Y=yi [m(S, yi)]\nSo we have 2 f(a) < 0. 
The lemma follows"}, {"section_index": "18", "section_name": "Proof of Theorem 6", "section_text": "(Vary [m(s, Y)])1/2 log p(Y|x,s;0)-Eylogp(Y|x,s;0) log p(Y|x,s;0) -Eylog p(Y|x,s;0) Ey Ey KL (p(|x, s;0)|Unif(V)) + log1/k- logp(Y|x,s;0) Ey KL(p(|x,s;0)|Unif(V)) +log1/k- logp(Y|x,s;0) KL (p(|x,s;0)|Unif(V)) +Eylogp(Y|x,s;0) -log1/k 2KL(p(|x,s;0)|Unif(V)\n2KL (p(|x,s;0)|Unif(V)) KL(p(|x,s;0)|Unif(V))+log1/k-logp(y|x,s;0) =-m(s,y)\nNow, we can prove Theorem 6 by using the same construction of an expectation-linearizing parameter as in Theorem 5.\n(0,0s)<(0,0)= (l(D;0)-l(D;0) n\nFrom Lemma 20 we have\nMNIST For MNIST, we train 6 different fully-connected (dense) neural networks with 2 or 3 layers (see Table 1). For all architectures, we used dropout rate p = 0.5 for all hidden layers and p = 0.2. for the input layer.\nCIFAR-10 and CIFAR-100 For the two CIFAR datasets, we used the same architecture in Srivas. tava et al. (2014) -- three convolutional layers followed by two fully-connected hidden layers. Th. convolutional layers have 96, 128, 265 filters respectively, with a 5 5 receptive field applied with stride of 1. Each convolutional layer is followed by a max pooling layer pools 3 3 regions at stride.. of 2. The fully-connected layers have 2048 units each. All units use the rectified linear activatior. function. Dropout was applied to all the layers with dropout rate p = (0.1, 0.25, 0.25, 0.5, 0.5, 0.5. for the layers going from input to convolutional layers to fully-connected layers.."}, {"section_index": "19", "section_name": "D.2 NEURAL NETWORK TRAINING", "section_text": "Neural network training in all the experiments is performed with mini-batch stochastic gradien descent (SGD) with momentum. We choose an initial learning rate of no, and the learning rate is. updated on each epoch of training as nt = no/(1 + pt), where p is the decay rate and t is the numbe. of epoch completed. We run each experiment with 2,o00 epochs and choose the parameters achieving. the best performance on validation sets.\nTable 3 summarizes the chosen hyper-parameters for all experiments. Most of the hyper-parameters. are chosen from Srivastava et al. (2014). But for some experiments, we cannot reproduce the. performance reported in Srivastava et al. (2014) (We guess one of the possible reasons is that we used. different library for implementation.). For these experiments, we tune the hyper-parameters on the. validation sets by random search. Due to time constrains it is infeasible to do a random search across the full hyper-parameter space. Thus, we try to use as many hyper-parameters reported in Srivastava. et al. (2014) as possible."}, {"section_index": "20", "section_name": "D.3 EFFECT OF EXPECTATION-LINEARIZATION RATE", "section_text": "Table 4 illustrates the detailed results of the experiments on the effect of X. For MNIST, it lists the error rates under different X values for six different network architectures. For two datasets of CIFAR it gives the error rates under different values, among with the empirical expectation-linearization risk .\n(D;0)=l(D;{,an})>(1a)l(D;{,0})+al(D;{,n}\nl(D;0)-l(D;{,0}) 0,0s) 1-a) (1) logp(yi|xi; 0) log Unif(V (1 a)E [KL(p(X;0)]|Unif(V))] E[KL(p(|X;0)|Unif())] 4||n|2\nTable 3: Hyper-parameters for all experiments\nTable 4: Detailed results for experiments on the effect of X\nExperiment Hyper-parameter batch size 200 initial learning rate no. 
Table 3 (hyper-parameters):

MNIST:
  batch size                                200
  initial learning rate η₀                  0.1
  decay rate ρ                              0.025
  momentum                                  0.9
  momentum type                             standard
  max-norm constraint                       3.5

CIFAR-10 / CIFAR-100:
  batch size                                100 / 100
  initial learning rate η₀ (conv layers)    0.001 / 0.001
  initial learning rate η₀ (dense layers)   0.1 / 0.02
  decay rate ρ                              0.005 / 0.005
  momentum                                  0.95 / 0.95
  momentum type                             standard / nesterov
  max-norm constraint                       4.0 / 2.0
  L2-norm decay                             0.001 / 0.001

Table 4 (effect of λ):

MNIST (error rate, %):
  λ        0.0   0.5   1.0   2.0   3.0   5.0   7.0   10.0
  model 1  1.23  1.12  1.12  1.08  1.07  1.10  1.25  1.35
  model 2  1.19  1.14  1.08  1.04  1.03  1.07  1.13  1.21
  model 3  1.05  1.04  0.98  1.03  1.05  1.05  1.10  1.12
  model 4  1.07  1.02  0.97  0.94  0.96  1.01  1.05  1.20
  model 5  1.03  0.95  0.95  0.90  0.92  0.98  1.03  1.08
  model 6  0.99  0.98  0.93  0.87  0.96  0.98  1.05  1.10

CIFAR-10:
  λ            0.0     0.1     0.5     1.0     2.0     5.0     10.0
  error rate   12.84   12.82   12.52   12.38   12.20   12.60   13.10
  risk δ̂       0.0139  0.0128  0.0104  0.0095  0.0089  0.0085  0.0077

CIFAR-100:
  λ            0.0     0.1     0.5     1.0     2.0     5.0     10.0
  error rate   37.22   36.75   36.25   37.01   37.18   37.58   38.01
  risk δ̂       0.0881  0.0711  0.0590  0.0529  0.0500  0.0467  0.0411
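To make the training schedule of Appendix D.2 concrete, here is a minimal sketch, assuming a 2-D weight matrix `w` and a user-supplied gradient function `grad_fn`; both names are placeholders, and for brevity the sketch applies one update per epoch rather than per mini-batch:

```python
# Minimal sketch of the schedule from Appendix D.2: SGD with momentum,
# a per-epoch learning-rate decay eta_t = eta_0 / (1 + rho * t), and a
# max-norm constraint on the incoming weights of each unit. The values
# below are the MNIST settings from Table 3.
import numpy as np

eta0, rho, mu, max_norm = 0.1, 0.025, 0.9, 3.5  # Table 3 (MNIST)

def train(w, grad_fn, num_epochs=2000):
    v = np.zeros_like(w)                       # momentum buffer
    for t in range(num_epochs):
        eta = eta0 / (1.0 + rho * t)           # per-epoch decay
        v = mu * v - eta * grad_fn(w)          # standard momentum
        w = w + v
        # project each row back inside the max-norm ball
        norms = np.linalg.norm(w, axis=1, keepdims=True)
        w = w * np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    return w
```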
SJ6yPD5xg

REINFORCEMENT LEARNING WITH UNSUPERVISED AUXILIARY TASKS

Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also learns separate policies for maximising many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10× and averaging 87% expert human performance on Labyrinth.

Natural and artificial agents live in a stream of sensorimotor data. At each time step t, the agent receives observations o_t and executes actions a_t. These actions influence the future course of the sensorimotor stream. In this paper we develop agents that learn to predict and control this stream, by solving a host of reinforcement learning problems, each focusing on a distinct feature of the sensorimotor stream. Our hypothesis is that an agent that can flexibly control its future experience will also be able to achieve any goal with which it is presented, such as maximising its future rewards.

The classic reinforcement learning paradigm focuses on the maximisation of extrinsic reward. However, in many interesting domains, extrinsic rewards are only rarely observed. This raises questions of what and how to learn in their absence. Even if extrinsic rewards are frequent, the sensorimotor stream contains an abundance of other possible learning targets. Traditionally, unsupervised learning attempts to reconstruct these targets, such as the pixels in the current or subsequent frame. It is typically used to accelerate the acquisition of a useful representation. In contrast, our learning objective is to predict and control features of the sensorimotor stream, by treating them as pseudo-rewards for reinforcement learning. Intuitively, this set of tasks is more closely matched with the agent's long-term goals, potentially leading to more useful representations.

Consider a baby that learns to maximise the cumulative amount of red that it observes. To correctly predict the optimal value, the baby must understand how to increase "redness" by various means, including manipulation (bringing a red object closer to the eyes); locomotion (moving in front of a red object); and communication (crying until the parents bring a red object). These behaviours are likely to recur for many other goals that the baby may subsequently encounter. No understanding of these behaviours is required to simply reconstruct the redness of current or subsequent images.

Our architecture uses reinforcement learning to approximate both the optimal policy and optimal value function for many different pseudo-rewards. It also makes other auxiliary predictions that serve to focus the agent on important aspects of the task.
These include the long-term goal of predicting cumulative extrinsic reward as well as short-term predictions of extrinsic reward. To learn more efficiently, our agents use an experience replay mechanism to provide additional updates to the critics. Just as animals dream about positively or negatively rewarding events more frequently (Olafsdottir et al., 2015; Schacter et al., 2012), our agents preferentially replay sequences containing rewarding events.

* Joint first authors. Ordered alphabetically by first name.

{jaderberg, vmnih, lejlot, schaul, jzl, davidsilver, korayk}@google.com

[Figure 1 schematic: (a) Base A3C Agent (ConvNet + LSTM, outputting π and V); (b) Pixel Control (auxiliary deconvolutional head); (c) Reward Prediction (auxiliary fully-connected head fed by skewed replay samples); (d) Value Function Replay. Observations, actions and rewards flow from the environment through a replay buffer to the auxiliary heads.]

Figure 1: Overview of the UNREAL agent. (a) The base agent is a CNN-LSTM agent trained on-policy with the A3C loss (Mnih et al., 2016). Observations, rewards, and actions are stored in a small replay buffer which encapsulates a short history of agent experience. This experience is used by auxiliary learning tasks. (b) Pixel Control - auxiliary policies Q_aux are trained to maximise change in pixel intensity of different regions of the input. The agent CNN and LSTM are used for this task along with an auxiliary deconvolution network. This auxiliary control task requires the agent to learn how to control the environment. (c) Reward Prediction - given three recent frames, the network must predict the reward that will be obtained in the next unobserved timestep. This task network uses instances of the agent CNN, and is trained on reward-biased sequences to remove the perceptual sparsity of rewards. (d) Value Function Replay - further training of the value function using the agent network is performed to promote faster value iteration. Further visualisation of the agent can be found in https://youtu.be/Uz-zGYrYEjA

Importantly, both the auxiliary control and auxiliary prediction tasks share the convolutional neural network and LSTM that the base agent uses to act. By using this jointly learned representation, the base agent learns to optimise extrinsic reward much faster and, in many cases, achieves better policies at the end of training.

This paper brings together the state-of-the-art Asynchronous Advantage Actor-Critic (A3C) framework (Mnih et al., 2016), outlined in Section 2, with auxiliary control tasks and auxiliary reward tasks, defined in Sections 3.1 and 3.2 respectively. These auxiliary tasks do not require any extra supervision or signals from the environment than the vanilla A3C agent. The result is our UNsupervised REinforcement and Auxiliary Learning (UNREAL) agent (Section 3.4).

In Section 4 we apply our UNREAL agent to a challenging set of 3D-vision based domains known as the Labyrinth (Mnih et al., 2016), learning solely from the raw RGB pixels of a first-person view. Our agent significantly outperforms the baseline agent using vanilla A3C, even when the baseline
was augmented with an unsupervised reconstruction loss, in terms of speed of learning, robustness to hyperparameters, and final performance. The result is an agent which on average achieves 87% of expert human-normalised score, compared to 54% with A3C, and is on average 10× faster than A3C. Our UNREAL agent also significantly outperforms the previous state-of-the-art in the Atari domain.

1 RELATED WORK

A variety of reinforcement learning architectures have focused on learning temporal abstractions, such as options (Sutton et al., 1999b), with policies that may maximise pseudo-rewards (Konidaris & Barreto, 2009; Silver & Ciosek, 2012). The emphasis here has typically been on the development of temporal abstractions that facilitate high-level learning and planning. In contrast, our agents do not make any direct use of the pseudo-reward maximising policies that they learn (although this is an interesting direction for future research). Instead, they are used solely as auxiliary objectives for developing a more effective representation.

The Horde architecture (Sutton et al., 2011) also applied reinforcement learning to identify value functions for a multitude of distinct pseudo-rewards. However, this architecture was not used for representation learning; instead each value function was trained separately using distinct weights.

The UVFA architecture (Schaul et al., 2015a) is a factored representation of a continuous set of optimal value functions, combining features of the state with an embedding of the pseudo-reward function. Initial work on UVFAs focused primarily on architectural choices and learning rules for these continuous embeddings. A pre-trained UVFA representation was successfully transferred to novel pseudo-rewards in a simple task.

Similarly, the successor representation (Dayan, 1993; Barreto et al., 2016; Kulkarni et al., 2016) factors a continuous set of expected value functions for a fixed policy, by combining an expectation over features of the state with an embedding of the pseudo-reward function. Successor representations have been used to transfer representations from one pseudo-reward to another (Barreto et al., 2016) or to different scales of reward (Kulkarni et al., 2016).

Another, related line of work involves learning models of the environment (Schmidhuber, 2010; Xie et al., 2015; Oh et al., 2015). Although learning environment models as auxiliary tasks could improve RL agents (e.g. Lin & Mitchell (1992); Li et al. (2015)), this has not yet been shown to work in rich visual environments.

More recently, auxiliary prediction tasks have been studied in 3D reinforcement learning environments. Lample & Chaplot (2016) showed that predicting internal features of the emulator, such as the presence of an enemy on the screen, is beneficial. Mirowski et al. (2016) study auxiliary prediction of depth in the context of navigation.

2 BACKGROUND

We assume the standard reinforcement learning setting where an agent interacts with an environment over a number of discrete time steps. At time t the agent receives an observation o_t along with a reward r_t and produces an action a_t. The agent's state s_t is a function of its experience up until time t, s_t = f(o_1, r_1, a_1, ..., o_t, r_t). The n-step return R_{t:t+n} at time t is defined as the discounted sum of rewards, R_{t:t+n} = r_{t+1} + γ r_{t+2} + ... + γ^{n−1} r_{t+n} (a minimal sketch of this return follows below). The value function is the expected discounted return from state s, V^π(s) = E[R_{t:∞} | s_t = s, π], when actions are selected according to a policy π(a|s). The action-value function Q^π(s, a) = E[R_{t:∞} | s_t = s, a_t = a, π] is the expected return following action a from state s.

Policy gradient algorithms adjust the policy to maximise the expected reward, E_{s∼π}[R_{1:∞}], using the gradient ∂E_{s∼π}[R_{1:∞}]/∂θ = E_{s∼π}[∂/∂θ log π(a|s, θ)(Q^π(s, a) − V^π(s))] (Sutton et al., 1999a); in practice the true value functions Q^π and V^π are substituted with approximations.
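As a concrete reference for the n-step return and its bootstrapped target, here is a minimal sketch in plain Python; the reward list and discount are placeholders, not values from the paper:

```python
# Minimal sketch of the n-step return R_{t:t+n} and the bootstrapped
# target R_{t:t+n} + gamma^n * V(s_{t+n+1}) used throughout the paper.
# `rewards` holds r_{t+1}, ..., r_{t+n}; `v_boot` stands for the
# critic's estimate V(s_{t+n+1}).
def n_step_return(rewards, gamma, v_boot=0.0):
    ret = 0.0
    for i, r in enumerate(rewards):          # i = 0 .. n-1
        ret += (gamma ** i) * r              # gamma^i * r_{t+1+i}
    ret += (gamma ** len(rewards)) * v_boot  # bootstrap from the critic
    return ret

# Example: a 3-step return with gamma = 0.99
print(n_step_return([0.0, 1.0, 0.0], gamma=0.99, v_boot=0.5))
```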
The Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016) constructs an approximation to both the policy π(a|s, θ) and the value function V(s, θ) using parameters θ. Both policy and value are adjusted towards an n-step lookahead value, R_{t:t+n} + γ^n V(s_{t+n+1}, θ), using an entropy regularisation penalty, L_A3C ≈ L_VR + L_π − E_{s∼π}[α H(π(s, ·, θ))], where

L_VR = E_{s∼π}[(R_{t:t+n} + γ^n V(s_{t+n+1}, θ⁻) − V(s_t, θ))²].

In A3C many instances of the agent interact in parallel with many instances of the environment, which both accelerates and stabilises learning. The A3C agent architecture we build on uses an LSTM to jointly approximate both policy π and value function V, given the entire history of experience as inputs (see Figure 1 (a)).

In this section we incorporate auxiliary tasks into the reinforcement learning framework in order to promote faster training, more robust learning, and ultimately higher performance for our agents. Section 3.1 introduces the use of auxiliary control tasks, Section 3.2 describes the addition of reward-focussed auxiliary tasks, and Section 3.4 describes the complete UNREAL agent combining these auxiliary tasks.

3.1 AUXILIARY CONTROL TASKS

The auxiliary control tasks we consider are defined as additional pseudo-reward functions in the environment the agent is interacting with. We formally define an auxiliary control task c by a reward function r^(c) : S × A → R, where S is the space of possible states and A is the space of available actions. The underlying state space S includes both the history of observations and rewards as well as the state of the agent itself, i.e. the activations of the hidden units of the network.

Given a set of auxiliary control tasks C, let π^(c) be the agent's policy for each auxiliary task c ∈ C and let π be the agent's policy on the base task. The overall objective is to maximise total performance across all these auxiliary tasks:

arg max_θ E_π[R_{1:∞}] + λ_C Σ_{c∈C} E_{π^(c)}[R^(c)_{1:∞}].

While many types of auxiliary reward functions can be defined from these quantities we focus on two specific types:

• Pixel changes - Changes in the perceptual stream often correspond to important events in an environment. We train agents that learn a separate policy for maximally changing the pixels in each cell of an n × n non-overlapping grid placed over the input image. We refer to these auxiliary tasks as pixel control. See Section 4 for a complete description.

• Network features - Since the policy or value networks of an agent learn to extract task-relevant high-level features of the environment (Mnih et al., 2015; Zahavy et al., 2016; Silver et al., 2016), they can be useful quantities for the agent to learn to control. Hence, the activation of any hidden unit of the agent's neural network can itself be an auxiliary reward. We train agents that learn a separate policy for maximally activating each of the units in a specific hidden layer. We refer to these tasks as feature control.

Figure 1 (b) shows an A3C agent architecture augmented with a set of auxiliary pixel control tasks. In this case, the base policy π shares both the convolutional visual stream and the LSTM with the auxiliary policies. The output of the auxiliary network head is an N_act × n × n tensor Q_aux, where Q_aux(a, i, j) represents the network's current estimate of the optimal discounted expected change in cell (i, j) of the input after taking action a. We exploit the spatial nature of the auxiliary tasks by using a deconvolutional neural network to produce the auxiliary values Q_aux. (A sketch of a corresponding off-policy learning target is given below.)
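As a rough illustration of how these auxiliary values could be trained, here is a minimal one-step sketch; the paper itself uses n-step Q-learning (Section 3.4), and all array names and shapes here are our own placeholders:

```python
# Minimal one-step Q-learning sketch for the auxiliary tensor Q_aux of
# shape (N_act, n, n). `r_pc` is the (n, n) grid of pixel-change
# pseudo-rewards for one transition and `q_next` is Q_aux at the next
# state. One step is shown for brevity; UNREAL uses n-step returns.
import numpy as np

def pixel_control_target(r_pc, q_next, gamma=0.99):
    # max over the action axis, independently for every grid cell
    greedy_next = q_next.max(axis=0)        # (n, n)
    return r_pc + gamma * greedy_next       # (n, n) target

def pixel_control_loss(q_aux, action_taken, target):
    # squared error on the Q-values of the single action actually taken
    q_taken = q_aux[action_taken]           # (n, n), action_taken is an int
    return np.mean((target - q_taken) ** 2)
```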
[Figure 2 panels: the agent's first-person RGB observation, and top-down views of procedurally generated nav_maze_all_random_02 samples.]

Figure 2: The raw RGB frame from the environment is the observation that is given as input to the agent, along with the last action and reward. This observation is shown for a sample of a maze from the nav_maze_all_random_02 level in Labyrinth. The agent must navigate this unseen maze and pick up apples giving +1 reward and reach the goal giving +10 reward, after which it will respawn. Top-down views of samples from this maze generator show the variety of mazes procedurally created. A video showing the agent playing Labyrinth levels can be viewed at https://youtu.be/Uz-zGYrYEjA

In addition to learning generally about the dynamics of the environment, an agent must learn to maximise the global reward stream. To learn a policy to maximise rewards, an agent requires features that recognise states that lead to high reward and value. An agent with a good representation of rewarding states will allow the learning of good value functions, and in turn should allow the easy learning of a policy.

However, in many interesting environments reward is encountered very sparsely, meaning that it can take a long time to train feature extractors adept at recognising states which signify the onset of reward. We want to remove the perceptual sparsity of rewards and rewarding states to aid the training of an agent, but to do so in a way which does not introduce bias to the agent's policy.

To do this, we introduce the auxiliary task of reward prediction - that of predicting the onset of immediate reward given some historical context. This task consists of processing a sequence of consecutive observations, and requiring the agent to predict the reward picked up in the subsequent unseen frame. This is similar to value learning focused on immediate reward (γ = 0).

We train the reward prediction task on sequences S_τ = (s_{τ−k}, s_{τ−k+1}, ..., s_{τ−1}) to predict the reward r_τ, and sample S_τ from the experience of our policy π in a skewed manner so as to over-represent rewarding events (presuming rewards are sparse within the environment). Specifically, we sample such that zero rewards and non-zero rewards are equally represented, i.e. the predicted probability of a non-zero reward is P(r_τ ≠ 0) = 0.5. The reward prediction is trained to minimise a loss L_RP. In our experiments we use a multiclass cross-entropy classification loss across three classes (zero, positive, or negative reward), although a mean-squared error loss is also feasible. (A minimal sketch of this classification target follows below.)

The auxiliary reward predictions may use a different architecture to the agent's main policy. Rather than simply "hanging" the auxiliary predictions off the LSTM, we use a simpler feedforward network that concatenates a stack of states S_τ after being encoded by the agent's CNN, see Figure 1 (c). The idea is to simplify the temporal aspects of the prediction task in both the future direction (focusing only on immediate reward prediction rather than long-term returns) and past direction (focusing only on immediate predecessor states rather than the complete history); the features discovered in this manner are shared with the primary LSTM (via shared weights in the convolutional encoder) to enable the policy to be learned more efficiently.
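Here is a minimal sketch of the three-class reward-prediction target and its cross-entropy loss; the function and variable names are ours, not the paper's:

```python
# Minimal sketch of the reward-prediction task: the reward of the
# unseen next frame is mapped to one of three classes (zero, positive,
# negative) and scored with cross-entropy against the auxiliary head's
# softmax output `class_probs`.
import numpy as np

def reward_class(r):
    if r == 0:
        return 0                  # zero reward
    return 1 if r > 0 else 2      # positive / negative reward

def reward_prediction_loss(class_probs, r):
    eps = 1e-8                    # numerical safety for the log
    return -np.log(class_probs[reward_class(r)] + eps)

# Example: the head assigns 0.7 to "positive" and the true reward is +1
print(reward_prediction_loss(np.array([0.2, 0.7, 0.1]), r=1.0))
```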
Unlike learning a value function, which is used to estimate returns and as a baseline while learning a policy, the reward predictor is not used for anything other than shaping the features of the agent. This keeps us free to bias the data distribution, therefore biasing the reward predictor and feature shaping, without biasing the value function or policy.

3.3 EXPERIENCE REPLAY

Experience replay has proven to be an effective mechanism for improving both the data efficiency and stability of deep reinforcement learning algorithms (Mnih et al., 2015). The main idea is to store transitions in a replay buffer, and then apply learning updates to sampled transitions from this buffer.

Experience replay provides a natural mechanism for skewing the distribution of reward prediction samples towards rewarding events: we simply split the replay buffer into rewarding and non-rewarding subsets, and replay equally from both subsets. The skewed sampling of transitions from a replay buffer means that rare rewarding states will be oversampled, and learnt from far more frequently than if we sampled sequences directly from the behaviour policy. This approach can be viewed as a simple form of prioritised replay (Schaul et al., 2015b). (A minimal sketch of this balanced sampling is given at the end of this subsection.)

In addition to reward prediction, we also use the replay buffer to perform value function replay (see Figure 1). This amounts to resampling recent historical sequences from the behaviour policy distribution and performing extra value function regression in addition to the on-policy value function regression in A3C. By resampling previous experience, and randomly varying the temporal position of the truncation window over which the n-step return is computed, value function replay performs value iteration and exploits newly discovered features shaped by reward prediction. We do not skew the distribution for this case.

Experience replay is also used to increase the efficiency and stability of the auxiliary control tasks. Q-learning updates are applied to sampled experiences that are drawn from the replay buffer, allowing features to be developed extremely efficiently.
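Here is a minimal sketch of the balanced sampling described above; the buffer contents and sequence format are placeholders:

```python
# Minimal sketch of the skewed replay sampling for reward prediction:
# the buffer is split into rewarding and non-rewarding sequences and we
# draw from each subset with probability 0.5, so that P(r_tau != 0) = 0.5.
import random

def sample_reward_prediction_sequence(rewarding, non_rewarding):
    # `rewarding` / `non_rewarding`: lists of (frames, next_reward) pairs
    subset = rewarding if random.random() < 0.5 else non_rewarding
    return random.choice(subset)
```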
3.4 UNREAL AGENT

The UNREAL algorithm combines the benefits of two separate, state-of-the-art approaches to deep reinforcement learning. The primary policy is trained with A3C (Mnih et al., 2016): it learns from parallel streams of experience to gain efficiency and stability; it is updated online using policy gradient methods; and it uses a recurrent neural network to encode the complete history of experience. This allows the agent to learn effectively in partially observed environments.

The auxiliary tasks are trained on very recent sequences of experience that are stored and randomly sampled; these sequences may be prioritised, in our case according to immediate rewards (Schaul et al., 2015b); these targets are trained off-policy by Q-learning; and they may use simpler feedforward architectures. This allows the representation to be trained with maximum efficiency.

The UNREAL algorithm optimises a single combined loss function with respect to the joint parameters of the agent, θ, that combines the A3C loss L_A3C together with an auxiliary control loss, an auxiliary reward prediction loss L_RP and a replayed value loss L_VR:

L_UNREAL(θ) = L_A3C + λ_VR L_VR + λ_PC Σ_c L_Q^(c) + λ_RP L_RP,

where λ_VR, λ_PC, λ_RP are weighting terms on the individual loss components. (A schematic of this weighted combination is sketched below.)

In practice, the loss is broken down into separate components that are computed either on-policy, directly from experience, or off-policy, on replayed transitions. Specifically, the A3C loss L_A3C is minimised on-policy; while the value function loss L_VR is optimised from replayed data, in addition to the A3C loss (of which it is one component, see Section 2). The auxiliary control loss L_PC is optimised off-policy from replayed data, by n-step Q-learning. Finally, the reward loss L_RP is optimised from rebalanced replay data.
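To make the combined objective concrete, here is a minimal sketch; the individual loss terms are placeholder scalars computed elsewhere, and the default weights are only the settings reported in Appendix B, not universal values:

```python
# Minimal sketch of the combined UNREAL objective above. lambda_vr and
# lambda_rp are 1 in the paper's experiments; lambda_pc is sampled
# log-uniformly per run (Appendix B), so 0.05 below is just a default.
def unreal_loss(l_a3c, l_vr, l_pc, l_rp,
                lambda_vr=1.0, lambda_pc=0.05, lambda_rp=1.0):
    return l_a3c + lambda_vr * l_vr + lambda_pc * l_pc + lambda_rp * l_rp
```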
4 EXPERIMENTS

In all our experiments we used an A3C CNN-LSTM agent as our baseline, and the UNREAL agent along with its ablated variants added auxiliary outputs and losses to this base agent. The agent is trained on-policy with 20-step returns and the auxiliary tasks are performed every 20 environment steps, corresponding to every update of the base A3C agent. The replay buffer stores the most recent 2k observations, actions, and rewards taken by the base agent. In Labyrinth we use the same set of 17 discrete actions for all games and on Atari the action set is game dependent (between 3 and 18 discrete actions). The full implementation details can be found in Section B.

4.1 LABYRINTH RESULTS

[Figure 3 panels: Labyrinth Performance and Labyrinth Robustness (top), Atari Performance and Atari Robustness (bottom); curves for UNREAL, A3C+PC, A3C+RP+VR, A3C+RP, A3C+VR and A3C against training steps and against the percentage of agents in the hyperparameter population.]

Figure 3: An overview of performance averaged across all levels on Labyrinth (Top) and Atari (Bottom). In the ablated versions RP is reward prediction, VR is value function replay, and PC is pixel control, with the UNREAL agent being the combination of all. Left: The mean human-normalised performance over the last 100 episodes of the top-3 jobs at every point in training. In Labyrinth, we achieve an average of 87% human-normalised score, with every element of the agent improving upon the 54% human-normalised score of vanilla A3C. Prior. Duel Clip and Duel Clip are Dueling Networks with gradient clipped to 10 as reported in Wang et al. (2016). Right: The final human-normalised score of every job in our hyperparameter sweep, sorted by score. On both Labyrinth and Atari, the UNREAL agent increases the robustness to the hyperparameters (namely learning rate and entropy cost).

Labyrinth is comparable to other first-person 3D game platforms for AI research like VizDoom (Kempka et al., 2016) or Minecraft (Tessler et al., 2016). However, in comparison, Labyrinth has considerably richer visuals and more realistic physics. Textures in Labyrinth are often dynamic (animated) so as to convey a game world where walls and floors shimmer and pulse, adding significant complexity to the perceptual task. The action space allows for fine-grained pointing in a fully 3D world. Labyrinth also supports continuous motion, unlike the Minecraft platform of Oh et al. (2016), which is a 3D grid world.

We evaluated agent performance on 13 Labyrinth levels that tested a range of different agent abilities. A top-down visualization showing the layout of each level can be found in Figure 9 of the Appendix. A gallery of example images from the first-person perspective of the agent is in Figure 10 of the Appendix. The levels can be divided into four categories:

1. Simple fruit gathering levels with a static map (seekavoid_arena_01 and stairway_to_melon_01). The goal of these levels is to collect apples (small positive reward) and melons (large positive reward) while avoiding lemons (small negative reward).

2. Navigation levels with a static map layout (nav_maze_static_0{1,2,3} and nav_maze_random_goal_0{1,2,3}). These levels test the agent's ability to find their way to a goal in a fixed maze that remains the same across episodes. The starting location is random. In this case, agents could encode the structure of the maze in network weights. In the random goal variant, the location of the goal changes in every episode. The optimal policy is to find the goal's location at the start of each episode and then use long-term knowledge of the maze layout to return to it as quickly as possible from any location. The static variant is simpler in that the goal location is always fixed for all episodes and only the agent's starting location changes, so the optimal policy does not require the first step of exploring to find the current goal location.

3. Procedurally-generated navigation levels requiring effective exploration of a new maze generated on-the-fly at the start of each episode (nav_maze_all_random_0{1,2,3}). These levels test the agent's ability to effectively explore a totally new environment. The optimal policy would begin by exploring the maze to rapidly learn its layout, and then exploit that knowledge to repeatedly return to the goal as many times as possible before the end of the episode (between 60 and 300 seconds).

4. Laser-tag levels requiring agents to wield laser-like science fiction gadgets to tag bots controlled by the game's in-built AI (lt_horse_shoe_color and lt_hallway_slope). A reward of 1 is delivered whenever the agent tags a bot by reducing its shield to 0. These levels approximate the default OpenArena/Quake3 gameplay mode. In lt_hallway_slope there is a sloped arena, requiring the agent to look up and down. In lt_horse_shoe_color, the color and textures of the bots are randomly generated at the start of each episode. This prevents agents from relying on color for bot detection. These levels test aspects of fine-control (for aiming), planning (to anticipate where bots are likely to move), strategy (to control key areas of the map such as gadget spawn points), and robustness to the substantial visual complexity arising from the large numbers of independently moving objects (gadget projectiles and bots).
We compared the full UNREAL agent to a basic A3C LSTM agent along with several ablated versions of UNREAL with different components turned off. A video of the final agent performance, as well as visualisations of the activations and auxiliary task outputs, can be viewed at https://youtu.be/Uz-zGYrYEjA

Figure 3 (top left) shows curves of mean human-normalised scores over the 13 Labyrinth levels. Adding each of our proposed auxiliary tasks to an A3C agent substantially improves the performance. Combining different auxiliary tasks leads to further improvements over the individual auxiliary tasks. The UNREAL agent, which combines all three auxiliary tasks, achieves more than twice the final human-normalised mean performance of A3C, increasing from 54% to 87% (45% to 92% for median performance). This includes a human-normalised score of 116% on lt_hallway_slope and 100% on nav_maze_random_goal_02.

Unsupervised Reinforcement Learning  In order to better understand the benefits of auxiliary control tasks we compared them to two simple baselines on three Labyrinth levels. The first baseline was A3C augmented with a pixel reconstruction loss, which has been shown to improve performance on 3D environments (Kulkarni et al., 2016). The second baseline was A3C augmented with an input change prediction loss, which can be seen as simply predicting the immediate auxiliary reward instead of learning to control. Finally, we include preliminary results for A3C augmented with the feature control auxiliary task on one of the levels. We retuned the hyperparameters of all methods (including learning rate and the weight placed on the auxiliary loss) for each of the three Labyrinth levels. Figure 5 shows the learning curves for the top 5 hyperparameter settings on three Labyrinth navigation levels. The results show that learning to control pixel changes is indeed better than simply predicting immediate pixel changes, which in turn is better than simply learning to reconstruct the input. In fact, learning to reconstruct only led to faster initial learning and actually made the final scores worse when compared to vanilla A3C. Our hypothesis is that input reconstruction hurts final performance because it puts too much focus on reconstructing irrelevant parts of the visual input instead of visual cues for rewards, since rewarding objects are rarely visible. We saw a substantial improvement from including the feature control auxiliary task, which was only slightly worse than pixel control. Combining feature control with other auxiliary tasks is a promising future direction.

Perhaps of equal importance, aside from final performance on the games, UNREAL is significantly faster at learning and therefore more data efficient, achieving a mean speedup in the number of steps to reach A3C best performance of 10× (median 11×) across all levels and up to 18× on nav_maze_random_goal_02. This translates into a drastic improvement in the data efficiency of UNREAL over A3C, requiring less than 10% of the data to reach the final performance of A3C. We can also measure the robustness of our learning algorithms to hyperparameters by measuring the performance over all hyperparameters (namely learning rate and entropy cost). This is shown in Figure 3 Top Right: every auxiliary task in our agent improves robustness.
A breakdown of the performance of A3C, UNREAL, and UNREAL without pixel control on the individual Labyrinth levels is shown in Figure 4.

[Figure 4 chart: per-level AUC Performance, Data Efficiency and Top5 Speedup bars for UNREAL and A3C+RP+VR, normalised by the A3C value; see caption.]

Figure 4: A breakdown of the improvement over A3C due to our auxiliary tasks for each level on Labyrinth. The values for A3C+RP+VR (reward prediction and value function replay) and UNREAL (reward prediction, value function replay and pixel control) are normalised by the A3C value. AUC Performance gives the robustness to hyperparameters (area under the robustness curve, Figure 3 Right). Data Efficiency is area under the mean learning curve for the top-5 jobs, and Top5 Speedup is the speedup for the mean of the top-5 jobs to reach the maximum top-5 mean score set by A3C. Speedup is not defined for stairway_to_melon as A3C did not learn throughout training.

[Figure 5 panels: learning curves on nav_maze_all_random_01 and nav_maze_random_goal_01 for A3C, A3C + input reconstruction, A3C + input change prediction, A3C + feature control and A3C + pixel control, over training steps in millions.]

Figure 5: Comparison of various forms of self-supervised learning on random maze navigation. Adding an input reconstruction loss to the objective leads to faster learning compared to an A3C baseline. Predicting changes in the inputs works better than simple image reconstruction. Learning to control changes leads to the best results.

4.2 ATARI

We applied the UNREAL agent as well as UNREAL without pixel control to 57 Atari games from the Arcade Learning Environment (Bellemare et al., 2012) domain. We use the same evaluation protocol as for our Labyrinth experiments, where we evaluate 50 different random hyperparameter settings (learning rate and entropy cost) on each game. The results are shown in the bottom row of Figure 3. The left side shows the average performance curves of the top 3 agents for all three methods; the right half shows sorted average human-normalised scores for each hyperparameter setting. More detailed learning curves for individual levels can be found in Figure 6. We see that UNREAL surpasses the current state-of-the-art agents, i.e. A3C and Prioritized Dueling DQN (Wang et al., 2016), across all levels, attaining 880% mean and 250% median performance.
Notably, UNREAL is also substantially more robust to hyperparameter settings than A3C.

CONCLUSION

We have shown how augmenting a deep reinforcement learning agent with auxiliary control and reward prediction tasks can drastically improve both data efficiency and robustness to hyperparameter settings. Most notably, our proposed UNREAL architecture more than doubled the previous state-of-the-art results on the challenging set of 3D Labyrinth levels, bringing the average scores to over 87% of human scores. The same UNREAL architecture also significantly improved both the learning speed and the robustness of A3C over 57 Atari games.

ACKNOWLEDGEMENTS

We thank Charles Beattie, Julian Schrittwieser, Marcus Wainwright, and Stig Petersen for environment design and development, and Amir Sadik and Sarah York for expert human game testing. We also thank Joseph Modayil, Andrea Banino, Hubert Soyer, Razvan Pascanu, and Raia Hadsell for many helpful discussions.

REFERENCES

Andre Barreto, Remi Munos, Tom Schaul, and David Silver. Successor features for transfer in reinforcement learning. arXiv preprint arXiv:1606.05312, 2016.

Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613-624, 1993.

Tejas D Kulkarni, Ardavan Saeedi, Simanta Gautam, and Samuel J Gershman. Deep successor reinforcement learning. arXiv preprint arXiv:1606.02396, 2016.

Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li Deng, and Ji He. Recurrent reinforcement learning: A hybrid approach. arXiv preprint arXiv:1509.03044, 2015.

Long-Ji Lin and Tom M Mitchell. Memory approaches to reinforcement learning in non-markovian domains. Technical report, Carnegie Mellon University, School of Computer Science, 1992.

Piotr Mirowski, Razvan Pascanu, Fabio Viola, Andrea Banino, Hubert Soyer, Andy Ballard, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, and Raia Hadsell. Learning to navigate in complex environments. 2016.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 02 2015. URL http://dx.doi.org/10.1038/

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. In NIPS Deep Learning Workshop, 2013.

Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. arXiv preprint arXiv:1605.09128, 2016.

Jing Peng and Ronald J Williams. Incremental multi-step Q-learning. Machine Learning, 22(1-3):283-290, 1996.

Daniel L Schacter, Donna Rose Addis, Demis Hassabis, Victoria C Martin, R Nathan Spreng, and Karl K Szpunar.
The future of memory: remembering, imagining, and the brain. Neuron, 76(4):677-694, 2012.

Jurgen Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010.

David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.

Richard S Sutton, David A McAllester, Satinder P Singh, Yishay Mansour, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057-1063, 1999a.

Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 1999b.

Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 2, pp. 761-768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.

Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical approach to lifelong learning in Minecraft. arXiv preprint arXiv:1604.07255, 2016.

Z. Wang, N. de Freitas, and M. Lanctot. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.

Christopher John Cornish Hellaby Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge, 1989.

Tom Zahavy, Nir Ben Zrihem, and Shie Mannor. Graying the black box: Understanding DQNs. In Proceedings of the 33rd International Conference on Machine Learning, 2016.

[Figure 6 panels: learning curves on montezuma_revenge, seaquest and chopper_command for UNREAL, A3C+RP+VR and A3C.]

Figure 6: Learning curves for three example Atari games. Semi-transparent lines are agents with different seeds and hyperparameters, the bold line is a mean over the population and the dotted line is the best agent (in terms of final performance).
B IMPLEMENTATION DETAILS

The input to the agent at each timestep was an 84 × 84 RGB image. All agents processed the input with the convolutional neural network (CNN) originally used for Atari by Mnih et al. (2013). The network consists of two convolutional layers. The first one has 16 8 × 8 filters applied with stride 4, while the second one has 32 4 × 4 filters with stride 2. This is followed by a fully connected layer with 256 units. All three layers are followed by a ReLU non-linearity. All agents used an LSTM with forget gates (Gers et al., 2000) with 256 cells, which takes in the CNN-encoded observation concatenated with the previous action taken and current reward. The policy and value function are linear projections of the LSTM output. The agent is trained with 20-step unrolls. The action space of the agent in the environment is game dependent for Atari (between 3 and 18 discrete actions), and 17 discrete actions for Labyrinth. Labyrinth runs at 60 frames-per-second. We use an action repeat of four, meaning that each action is repeated four times, with the agent receiving the final fourth frame as input to the next processing step.

For the pixel control auxiliary tasks we trained policies to control the central 80 × 80 crop of the inputs. The cropped region was subdivided into a 20 × 20 grid of non-overlapping 4 × 4 cells. The instantaneous reward in each cell was defined as the average absolute difference from the previous frame, where the average is taken over both pixels and channels in the cell (a sketch of this pseudo-reward is given at the end of this appendix). The output tensor of auxiliary values, Q_aux, is produced from the LSTM outputs by a deconvolutional network. The LSTM outputs are first mapped to a 32 × 7 × 7 spatial feature map with a linear layer followed by a ReLU. This is followed by a deconvolutional layer of 32 3 × 3 filters and a ReLU, resulting in a 32 × 9 × 9 feature map. Deconvolution layers with 1 and N_act filters of size 4 × 4 and stride 2 map the 32 × 9 × 9 feature map into a value tensor and an advantage tensor respectively. The spatial map is then decoded into Q-values using the dueling parametrization (Wang et al., 2016), producing the N_act × 20 × 20 output Q_aux. There is a final ReLU nonlinearity on the Q_aux output.

The architecture for feature control was similar. We learned to control the second hidden layer, which is a spatial feature map with size 32 × 9 × 9. Similarly to pixel control, we exploit the spatial structure in the data and used a deconvolutional network to produce Q_aux from the LSTM outputs.

The reward prediction task is performed on a sequence of three observations, which are fed through three instances of the agent's CNN. The three encoded CNN outputs are concatenated and fed through a fully connected layer of 128 units with ReLU activations, followed by a final linear three-class classifier and softmax. The reward is predicted as one of three classes (positive, negative, or zero) and trained with a task weight λ_RP = 1. The value function replay is performed on a sequence of length 20 with a task weight λ_VR = 1.

The auxiliary tasks are performed every 20 environment steps, corresponding to every update of the base A3C agent, once the replay buffer has filled with agent experience. The replay buffer stores the most recent 2k observations, actions, and rewards taken by the base agent.

The agents are optimised over 32 asynchronous threads with shared RMSProp (Mnih et al., 2016). The learning rates are sampled from a log-uniform distribution between 0.0001 and 0.005. The entropy costs are sampled from a log-uniform distribution between 0.0005 and 0.01. The task weight λ_PC is sampled from a log-uniform distribution between 0.01 and 0.1 for Labyrinth and between 0.0001 and 0.01 for Atari (since Atari games are not homogeneous in terms of pixel intensity changes, we need to fit this normalization factor).
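As a concrete reference for the pixel-control pseudo-reward defined in this appendix, here is a minimal sketch, assuming frames are float NumPy arrays of shape (84, 84, 3); the exact crop offset is our assumption:

```python
# Minimal sketch of the pixel-control pseudo-reward: the central 80x80
# crop is divided into a 20x20 grid of 4x4 cells, and each cell's
# reward is the mean absolute difference from the previous frame,
# averaged over pixels and channels.
import numpy as np

def pixel_control_rewards(prev_frame, frame, cell=4):
    # central 80x80 crop of both 84x84 frames (offset is an assumption)
    prev_crop = prev_frame[2:82, 2:82, :]
    crop = frame[2:82, 2:82, :]
    diff = np.abs(crop - prev_crop)             # (80, 80, 3)
    n = 80 // cell                              # n = 20
    # group pixels into 4x4 cells and average over pixels and channels
    diff = diff.reshape(n, cell, n, cell, 3)
    return diff.mean(axis=(1, 3, 4))            # (20, 20) grid of rewards
```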
Each agent was trained with 45 randomly sampled values of hyperparameters. Each of them also starts with a different random seed (however, due to the asynchronous nature of A3C this does not determinise the learning procedure).

Previous sections showed that the UNREAL agent is more robust to the choice of hyperparameters than A3C. To present an even clearer picture of this effect, we show learning curves averaged over all hyperparameters/seeds used in the experiments in Figure 7. It is worth noting that the standard error for such curves is not increased despite adding our auxiliary tasks.

[Figure 7 panels: Labyrinth learning curves averaged over all agents, with UNREAL reaching 53% versus 23% for A3C.]

Figure 7: Learning curves averaged across all hyperparameters (left), and the same curves for three types of agent plotted with standard error (right).

We also include scatter plots of averaged final human-normalised performance with respect to the two main hyperparameters (learning rate and entropy cost) in Figure 8. The final performance across all levels varies rather smoothly across similar hyperparameters, showing that learning is not significantly affected by random seeds. The only significant inconsistency, which can be spotted around the (-3, -3) point in the UNREAL plot, is an effect of the third hyperparameter, λ_PC, which differs a lot between these runs.

[Figure 8 panels: scatter plots over log10(learning rate) and log10(entropy cost) for A3C, A3C+RP+VR and UNREAL.]

Figure 8: Human-normalised performance for each hyperparameter setting with respect to the main hyperparameters of A3C - learning rate and entropy cost.

D RAW ATARI SCORES

                      Random    UNREAL (random starts)       Human
                      Raw       Raw       Normalised         Raw
alien                 228       2087      30%                6371
amidar                6         4463      290%               1540
assault               222       16853     4091%              629
asterix               210       154818    2110%              7536
asteroids             719       248289    692%               36517
atlantis              12850     990904    7126%              26575
bank_heist            14        1353      212%               644
battle_zone           2360      147700    474%               33030
beam_rider            364       39250     266%               14961
berzerk               124       41489     1957%              2238
bowling               23        58        29%                146
boxing                0         94        980%               10
breakout              2         751       2861%              28
centipede             2091      4612      31%                10322
chopper_command       811       75028     914%               8930
crazy_climber         10780     129674    543%               32667
defender              2874      417812    3633%              14296
demon_attack          152       106937    3245%              3443
double_dunk           -19       21        943%               -14
enduro                0         0         0%                 740
fishing_derby         -92       42        138%               5
freeway               0         34        133%               26
frostbite             65        3795      90%                4203
gopher                258       54007     2618%              2311
gravitar              173       6310      209%               3116
hero                  1027      37291     146%               25839
ice_hockey            -11       16        233%               0
jamesbond             29        69872     20572%             368
kangaroo              52        14838     550%               2739
krull                 1598      10587     1759%              2109
kung_fu_master        258       76676     372%               20787
montezuma_revenge     0         2902      69%                4182
ms_pacman             307       5423      34%                15375
name_this_game        2292      12602     229%               6796
phoenix               761       404280    6811%              6686
pitfall               -229      0         4%                 5999
pong                  -21       8         79%                16
private_eye           25        546       1%                 64169
qbert                 164       26437     220%               12085
riverraid             1338      19077     136%               14382
road_runner           12        52596     766%               6878
robotank              2         79        1136%              9
seaquest              68        5305      13%                40426
skiing                -17098    -8988     60%                -3687
solaris               1236      2895      17%                11033
space_invaders        148       25851     1952%              1465
star_gunner           664       72864     815%               9528
surround              -10       10        128%               5
tennis                -24       -0        136%               -7
time_pilot            3568      89559     4130%              5650
tutankham             11        294       222%               138
up_n_down             533       339119    3616%              9896
venture               0         0         0%                 1039
video_pinball         0         518567    3315%              15641
wizard_of_wor         564       35344     871%               4556
yars_revenge          3093      42889     90%                47135
zaxxon                32        60044     714%               8443
Mean                  -         -         1453%              -
Median                -         -         331%               -
Table 1: Raw scores of the best UNREAL agent (selected according to the learning curve) for all Atari games considered. Scores are averaged over 200 runs with random starts. The normalised score of s is (s − s_random)/(s_human − s_random).

LABYRINTH LEVELS

[Figure 9 panels: top-down renderings of stairway_to_melon, seekavoid_arena_01, nav_maze_*_01, nav_maze_*_02, nav_maze_*_03, lt_horse_shoe_color and lt_hallway_slope, annotated with agent spawn points, apples (+1), melons (+10), lemons (-1), goals (+10), bots and power-ups.]

Figure 9: Top-down renderings of each Labyrinth level. The nav_maze_*_0{1,2,3} levels show one example maze layout. In the all_random case, a new maze was randomly generated at the start of each episode.

[Figure 10 panels: first-person views from stairway_to_melon, seekavoid_arena_01, nav_maze_*_01/02/03, lt_horse_shoe_color and lt_hallway_slope.]

Figure 10: Example images from the agent's egocentric viewpoint for each Labyrinth level.
S1OufnIlx

ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD

Alexey Kurakin
kurakin@google.com

Ian J. Goodfellow
ian@openai.com

Samy Bengio
Google Brain
bengio@google.com

ABSTRACT

Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work has assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.

[Figure 1 panels, with the app's classifier outputs: (a) image from dataset; (b) clean image - washer: 0.5398173, safe: 0.34602574; (c) adversarial image, ε = 4 - safe: 0.3719305, washer: 0.22088042; (d) adversarial image, ε = 8 - loudspeaker: 0.24184975.]

Figure 1: Demonstration of a black box attack (in which the attack is constructed without access to the model) on a phone app for image classification using physical adversarial examples. We took a clean image from the dataset (a) and used it to generate adversarial images with various sizes of adversarial perturbation ε. Then we printed clean and adversarial images and used the TensorFlow Camera Demo app to classify them. A clean image (b) is recognized correctly as a "washer" when perceived through the camera, while adversarial images (c) and (d) are misclassified. See video of

However, machine learning models are often vulnerable to adversarial manipulation of their input intended to cause incorrect classification (Dalvi et al., 2004). In particular, neural networks and many other categories of machine learning models are highly vulnerable to attacks based on small modifications of the input to the model at test time (Biggio et al., 2013; Szegedy et al., 2014; Goodfellow et al., 2014; Papernot et al., 2016b).

The problem can be summarized as follows. Let's say there is a machine learning system M and an input sample C which we call a clean example. Let's assume that sample C is correctly classified by the machine learning system, i.e. M(C) = y_true. It's possible to construct an adversarial example A which is perceptually indistinguishable from C but is classified incorrectly, i.e. M(A) ≠ y_true.
These adversarial examples are misclassified far more often than examples that have been perturbed by noise, even if the magnitude of the noise is much larger than the magnitude of the adversarial perturbation (Szegedy et al., 2014).

Adversarial examples pose potential security threats for practical machine learning applications. In particular, Szegedy et al. (2014) showed that an adversarial example that was designed to be misclassified by a model M1 is often also misclassified by a model M2. This adversarial example transferability property means that it is possible to generate adversarial examples and perform a misclassification attack on a machine learning system without access to the underlying model. Papernot et al. (2016a) and Papernot et al. (2016b) demonstrated such attacks in realistic scenarios.

However, all prior work on adversarial examples for neural networks made use of a threat model in which the attacker can supply input directly to the machine learning model. Prior to this work, it was not known whether adversarial examples would remain misclassified if the examples were constructed in the physical world and observed through a camera.

Such a threat model can describe some scenarios in which attacks can take place entirely within a computer, such as evading spam filters or malware detectors (Biggio et al., 2013; Nelson et al.). However, many practical machine learning systems operate in the physical world. Possible examples include but are not limited to: robots perceiving the world through cameras and other sensors, video surveillance systems, and mobile applications for image or sound classification. In such scenarios, the adversary cannot rely on the ability of fine-grained per-pixel modifications of the input data. The following question thus arises: is it still possible to craft adversarial examples and perform adversarial attacks on machine learning systems which are operating in the physical world and perceiving data through various sensors, rather than a digital representation?

Some prior work has addressed the problem of physical attacks against machine learning systems, but not in the context of fooling neural networks by making very small perturbations of the input. For example, Carlini et al. (2016) demonstrate an attack that can create audio inputs that mobile phones recognize as containing intelligible voice commands, but that humans hear as an unintelligible voice. Face recognition systems based on photos are vulnerable to replay attacks, in which a previously captured image of an authorized user's face is presented to the camera instead of an actual face (Smith et al., 2015). Adversarial examples could in principle be applied in either of these physical domains. An adversarial example for the voice command domain would consist of a recording that seems to be innocuous to a human observer (such as a song) but contains voice commands recognized by a machine learning algorithm. An adversarial example for the face recognition domain might consist of very subtle markings applied to a person's face, so that a human observer would recognize their identity correctly, but a machine learning system would recognize them as being a different person. The most similar work to this paper is Sharif et al. (2016), which appeared publicly after our work but had been submitted to a conference earlier. Sharif et al. (2016) also print images of adversarial examples on paper and demonstrated that the printed images fool image recognition systems when photographed.
The main differences between their work and ours are that: (1) we use a cheap closed-form attack for most of our experiments, while Sharif et al. (2016) use a more expensive attack based on an optimization algorithm; (2) we make no particular effort to modify our adversarial examples to improve their chances of surviving the printing and photography process: we simply make the scientific observation that very many adversarial examples do survive this process without any intervention, while Sharif et al. (2016) introduce extra features to make their attacks work as best as possible for practical attacks against face recognition systems; (3) Sharif et al. (2016) are restricted in the number of pixels they can modify (only those on the glasses frames) but can modify those pixels by a large amount, whereas we are restricted in the amount we can modify a pixel but are free to modify all of them.

To investigate the extent to which adversarial examples survive in the physical world, we conducted an experiment with a pre-trained ImageNet Inception classifier (Szegedy et al., 2015). We generated adversarial examples for this model, then we fed these examples to the classifier through a cell-phone camera and measured the classification accuracy. This scenario is a simple physical world system which perceives data through a camera and then runs image classification. We found that a large fraction of adversarial examples generated for the original model remain misclassified even when perceived through a camera.

Surprisingly, our attack methodology required no modification to account for the presence of the camera: the simplest possible attack of using adversarial examples crafted for the Inception model resulted in adversarial examples that successfully transferred to the union of the camera and Inception. Our results thus provide a lower bound on the attack success rate that could be achieved with more specialized attacks that explicitly model the camera while crafting the adversarial example.

One limitation of our results is that we have assumed a threat model under which the attacker has full knowledge of the model architecture and parameter values. This is primarily so that we can use a single Inception v3 model in all experiments, without having to devise and train a different high-performing model. The adversarial example transfer property implies that our results could be extended trivially to the scenario where the attacker does not have access to the model description (Szegedy et al., 2014; Goodfellow et al., 2014; Papernot et al., 2016b). While we haven't run detailed experiments to study transferability of physical adversarial examples, we were able to build a simple phone application to demonstrate a potential adversarial black box attack in the physical world; see Fig. 1.

To better understand how the non-trivial image transformations caused by the camera affect adversarial example transferability, we conducted a series of additional experiments where we studied
how adversarial examples transfer across several specific kinds of synthetic image transformations.

The rest of the paper is structured as follows: In Section 2, we review the different methods which we used to generate adversarial examples. This is followed in Section 3 by details about our "physical world" experimental set-up and results. Finally, Section 4 describes our experiments with various artificial image transformations (like changing brightness, contrast, etc.) and how they affect adversarial examples.

This section describes the different methods we used in the experiments to generate adversarial examples. It is important to note that none of the described methods guarantees that the generated image will be misclassified. Nevertheless we call all of the generated images "adversarial images".

In the remainder of the paper we use the following notation:

- $X$ — an image, which is typically a 3-D tensor (width $\times$ height $\times$ depth). In this paper, we assume that the values of the pixels are integer numbers in the range $[0, 255]$.
- $y_{true}$ — the true class for the image $X$.
- $J(X, y)$ — the cross-entropy cost function of the neural network, given image $X$ and class $y$. We intentionally omit the network weights (and other parameters) $\theta$ in the cost function because we assume they are fixed (to the value resulting from training the machine learning model) in the context of this paper. For neural networks with a softmax output layer, the cross-entropy cost function applied to integer class labels equals the negative log-probability of the true class given the image: $J(X, y) = -\log p(y|X)$; this relationship will be used below.
- $Clip_{X,\epsilon}\{X'\}$ — a function which performs per-pixel clipping of the image $X'$, so the result lies in the $L_\infty$ $\epsilon$-neighbourhood of the source image $X$. The exact clipping equation is:

$$Clip_{X,\epsilon}\{X'\}(x, y, z) = \min\bigl\{255,\; X(x, y, z) + \epsilon,\; \max\{0,\; X(x, y, z) - \epsilon,\; X'(x, y, z)\}\bigr\}$$

where $X(x, y, z)$ is the value of channel $z$ of the image $X$ at coordinates $(x, y)$.

1 Dileep George noticed that another kind of adversarially constructed input, designed to have no true class yet be categorized as belonging to a specific class, fooled convolutional networks when photographed, in less systematic experiments. As of August 19, 2016 it was mentioned in Figure 6 at http://www.

"}, {"section_index": "2", "section_name": "2.1 FAST METHOD", "section_text": "One of the simplest methods to generate adversarial images, described by Goodfellow et al. (2014), is motivated by linearizing the cost function and solving for the perturbation that maximizes the cost subject to an $L_\infty$ constraint. This may be accomplished in closed form, for the cost of one call to back-propagation:

$$X^{adv} = X + \epsilon\,\mathrm{sign}\bigl(\nabla_X J(X, y_{true})\bigr)$$

In this paper we refer to this method as "fast" because it does not require an iterative procedure to compute adversarial examples, and thus is much faster than the other considered methods.

We introduce a straightforward way to extend the "fast" method: we apply it multiple times with a small step size, and clip the pixel values of intermediate results after each step to ensure that they are in an $\epsilon$-neighbourhood of the original image:

$$X^{adv}_0 = X, \qquad X^{adv}_{N+1} = Clip_{X,\epsilon}\bigl\{X^{adv}_N + \alpha\,\mathrm{sign}\bigl(\nabla_X J(X^{adv}_N, y_{true})\bigr)\bigr\}$$

Below we refer to this method as the "basic iterative" method.

In our experiments we used $\alpha = 1$, i.e. we changed the value of each pixel only by 1 on each step. We selected the number of iterations to be $\min(\epsilon + 4, 1.25\epsilon)$. This amount of iterations was chosen heuristically; it is sufficient for the adversarial example to reach the edge of the $\epsilon$ max-norm ball, but restricted enough to keep the computational cost of the experiments manageable.
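To make the two methods above concrete, the following is a minimal sketch in Python. The toy linear softmax model and its analytic `grad_loss` are illustrative assumptions standing in for a real network's back-propagated gradient; this is not the Inception v3 setup used in our experiments.

```python
# Hedged sketch of the "fast" and "basic iterative" methods on a toy model.
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim = 10, 3 * 8 * 8          # tiny stand-in for an image classifier
W = rng.normal(size=(num_classes, dim))   # toy (assumed) model weights

def grad_loss(x, y):
    """Gradient of the cross-entropy J(X, y) w.r.t. the (flattened) image x."""
    logits = W @ x
    p = np.exp(logits - logits.max()); p /= p.sum()
    p[y] -= 1.0                           # softmax + cross-entropy: dJ/dlogits = p - e_y
    return W.T @ p

def fast_method(x, y_true, eps):
    # X_adv = X + eps * sign(grad_X J(X, y_true)), then clip to valid pixel range
    return np.clip(x + eps * np.sign(grad_loss(x, y_true)), 0, 255)

def basic_iterative(x, y_true, eps, alpha=1.0):
    n_iter = int(min(eps + 4, 1.25 * eps))  # heuristic iteration count from the text
    x_adv = x.copy()
    for _ in range(n_iter):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, y_true))
        # per-pixel clipping keeps x_adv inside the L_inf eps-ball around x
        x_adv = np.clip(x_adv, np.maximum(0, x - eps), np.minimum(255, x + eps))
    return x_adv

x = rng.uniform(0, 255, size=dim)          # toy "image"
print(fast_method(x, y_true=3, eps=16).shape, basic_iterative(x, 3, 16).shape)
```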
Both methods we have described so far simply try to increase the cost of the correct class, without specifying which of the incorrect classes the model should select. Such methods are sufficient for application to datasets such as MNIST and CIFAR-10, where the number of classes is small and all classes are highly distinct from each other. On ImageNet, with a much larger number of classes and varying degrees of significance in the differences between classes, these methods can result in uninteresting misclassifications, such as mistaking one breed of sled dog for another breed of sled dog. In order to create more interesting mistakes, we introduce the iterative least-likely class method.

This iterative method tries to make an adversarial image which will be classified as a specific desired target class. For the desired class we chose the least-likely class according to the prediction of the trained network on image $X$:

$$y_{LL} = \arg\min_y \bigl\{p(y|X)\bigr\}$$

For a well-trained classifier, the least-likely class is usually highly dissimilar from the true class, so this attack method results in more interesting mistakes, such as mistaking a dog for an airplane.

To make an adversarial image which is classified as $y_{LL}$ we maximize $\log p(y_{LL}|X)$ by making iterative steps in the direction of $\mathrm{sign}\{\nabla_X \log p(y_{LL}|X)\}$. This last expression equals $\mathrm{sign}\{-\nabla_X J(X, y_{LL})\}$ for neural networks with cross-entropy loss. Thus we have the following procedure:

$$X^{adv}_0 = X, \qquad X^{adv}_{N+1} = Clip_{X,\epsilon}\bigl\{X^{adv}_N - \alpha\,\mathrm{sign}\bigl(\nabla_X J(X^{adv}_N, y_{LL})\bigr)\bigr\}$$

For this iterative procedure we used the same $\alpha$ and the same number of iterations as for the basic iterative method. Below we refer to this method as the "least likely class" method or, shortly, the "l.l. class" method.
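Continuing in the spirit of the earlier sketch, the least-likely class step differs only in its target and the sign of the update. Here `predict_probs` and `grad_loss` are assumed callables supplied by the caller (e.g. the toy model above), not the exact experimental code:

```python
# Hedged sketch of the iterative least-likely class method.
import numpy as np

def least_likely_class(x, eps, predict_probs, grad_loss, alpha=1.0):
    y_ll = int(np.argmin(predict_probs(x)))   # y_LL = argmin_y p(y|X)
    n_iter = int(min(eps + 4, 1.25 * eps))    # same schedule as the basic iterative method
    x_adv = x.copy()
    for _ in range(n_iter):
        # descend the loss toward y_LL: subtract the signed gradient
        x_adv = x_adv - alpha * np.sign(grad_loss(x_adv, y_ll))
        x_adv = np.clip(x_adv, np.maximum(0, x - eps), np.minimum(255, x + eps))
    return x_adv
```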
As mentioned above, it is not guaranteed that an adversarial image will actually be misclassified: sometimes the attacker wins, and sometimes the machine learning model wins. We did an experimental comparison of the adversarial methods to understand the actual classification accuracy on the generated images as well as the types of perturbations exploited by each of the methods.

[Figure 2: two panels plotting top-1 and top-5 accuracy against $\epsilon$ for clean images and for fast, basic iterative, and least likely class adversarial images.]

Figure 2: Top-1 and top-5 accuracy of Inception v3 under attack by different adversarial methods and different $\epsilon$, compared to "clean images", i.e. unmodified images from the dataset. The accuracy was computed on all 50,000 validation images from the ImageNet dataset. In these experiments $\epsilon$ varies from 2 to 128.

The experiments were performed on all 50,000 validation samples from the ImageNet dataset (Russakovsky et al., 2014) using a pre-trained Inception v3 classifier (Szegedy et al., 2015). For each validation image, we generated adversarial examples using the different methods and different values of $\epsilon$. For each pair of method and $\epsilon$, we computed the classification accuracy on all 50,000 images. Also, we computed the accuracy on all clean images, which we used as a baseline.

Top-1 and top-5 classification accuracy on clean and adversarial images for the various adversarial methods is summarized in Figure 2. Examples of generated adversarial images can be found in the Appendix in Figures 5 and 4.

As shown in Figure 2, the fast method decreases top-1 accuracy by a factor of two and top-5 accuracy by about 40% even with the smallest values of $\epsilon$. As we increase $\epsilon$, accuracy on adversarial images generated by the fast method stays on approximately the same level until $\epsilon = 32$ and then slowly decreases to almost 0 as $\epsilon$ grows to 128. This can be explained by the fact that the fast method adds $\epsilon$-scaled noise to each image, so higher values of $\epsilon$ essentially destroy the content of the image and make it unrecognisable even by humans, see Figure 5.

On the other hand, iterative methods exploit much finer perturbations which do not destroy the image even at higher $\epsilon$, and at the same time confuse the classifier at a higher rate. The basic iterative method is able to produce better adversarial images when $\epsilon < 48$; however, as we increase $\epsilon$, it is unable to improve. The "least likely class" method destroys the correct classification of most images even when $\epsilon$ is relatively small.

We limit all further experiments to $\epsilon \le 16$ because such perturbations are only perceived as small noise (if perceived at all), and the adversarial methods are able to produce a significant number of misclassified examples in this $\epsilon$-neighbourhood of clean images.

To study the influence of arbitrary transformations on adversarial images we introduce the notion of destruction rate. It can be described as the fraction of adversarial images which are no longer misclassified after the transformations. The formal definition is the following:

$$d = \frac{\sum_{k=1}^{n} C(X^k, y^k_{true})\,\overline{C}(X^k_{adv}, y^k_{true})\,C(T(X^k_{adv}), y^k_{true})}{\sum_{k=1}^{n} C(X^k, y^k_{true})\,\overline{C}(X^k_{adv}, y^k_{true})} \tag{1}$$

where $n$ is the number of images used to compute the destruction rate, $X^k$ is an image from the dataset, $y^k_{true}$ is the true class of this image, and $X^k_{adv}$ is the corresponding adversarial image. The function $T(\cdot)$ is an arbitrary image transformation; in this article, we study a variety of transformations, including printing the image and taking a photo of the result. The function $C(X, y)$ is an indicator function which returns whether the image was classified correctly:

$$C(X, y) = \begin{cases} 1, & \text{if image } X \text{ is classified as } y; \\ 0, & \text{otherwise.} \end{cases}$$

We denote the binary negation of this indicator value as $\overline{C}(X, y)$, which is computed as $\overline{C}(X, y) = 1 - C(X, y)$.

[Figure 3 panels: (a) Printout; (b) Photo of printout; (c) Cropped image.]

Figure 3: Experimental setup: (a) generated printout which contains pairs of clean and adversarial images, as well as QR codes to help automatic cropping; (b) photo of the printout made by a cellphone camera; (c) automatically cropped image from the photo.

To explore the possibility of physical adversarial examples we ran a series of experiments with photos of adversarial examples. We printed clean and adversarial images, took photos of the printed pages, and cropped the printed images from the photos of the full page. We can think of this as a black box transformation that we refer to as "photo transformation".

We computed the accuracy on clean and adversarial images before and after the photo transformation, as well as the destruction rate of adversarial images subjected to photo transformation.
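The destruction rate in Equation (1) reduces to a few lines of code. A minimal sketch, assuming `classify` returns a top-1 label and `transform` implements $T(\cdot)$ (both hypothetical callables, e.g. the photo pipeline):

```python
# Sketch of the destruction-rate computation defined in Equation (1).
def destruction_rate(images, adv_images, y_true, classify, transform):
    num, den = 0, 0
    for x, x_adv, y in zip(images, adv_images, y_true):
        c_clean = classify(x) == y            # C(X^k, y^k_true)
        c_adv_wrong = classify(x_adv) != y    # negated C(X^k_adv, y^k_true)
        if c_clean and c_adv_wrong:           # only count actual adversarial examples
            den += 1
            num += classify(transform(x_adv)) == y  # C(T(X^k_adv), y^k_true)
    return num / den if den else float("nan")
```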
The experimental procedure was as follows:

1. Print the image, see Figure 3a. In order to reduce the amount of manual work, we printed multiple pairs of clean and adversarial examples on each sheet of paper. Also, QR codes were put into the corners of the printout to facilitate automatic cropping.
   (a) All generated pictures of printouts (Figure 3a) were saved in lossless PNG format.
   (b) Batches of PNG printouts were converted to multi-page PDF files using the convert tool from the ImageMagick suite with the default settings: convert *.png output.pdf
   (c) Generated PDF files were printed using a Ricoh MP C5503 office printer. Each page of the PDF file was automatically scaled to fit the entire sheet of paper using the default printer scaling. The printer resolution was set to 600dpi.
2. Take a photo of the printed image using a cell phone camera (Nexus 5x), see Figure 3b.
3. Automatically crop and warp the validation examples from the photo, so they become squares of the same size as the source images, see Figure 3c; a sketch of this warping step is given below.
   (a) Detect the values and locations of the four QR codes in the corners of the photo. The QR codes encode which batch of validation examples is shown on the photo. If detection of any of the corners failed, the entire photo was discarded and images from the photo were not used to calculate accuracy. We observed that no more than 10% of all images were discarded in any experiment, and typically the number of discarded images was about 3% to 6%.
   (b) Warp the photo using a perspective transform to move the locations of the QR codes into pre-defined coordinates.
   (c) After the image is warped, each example has known coordinates and can easily be cropped from the image.
4. Run classification on transformed and source images. Compute the accuracy and destruction rate of adversarial images.

This procedure involves manually taking photos of the printed pages, without careful control of lighting, camera angle, distance to the page, etc. This is intentional; it introduces nuisance variability that has the potential to destroy adversarial perturbations that depend on subtle, fine co-adaptation of exact pixel values. That being said, we did not intentionally seek out extreme camera angles or lighting conditions. All photos were taken in normal indoor lighting with the camera pointed approximately straight at the page.

For each combination of adversarial example generation method and $\epsilon$ we conducted two sets of experiments:

- Average case. To measure the average case performance, we randomly selected 102 images to use in one experiment with a given $\epsilon$ and adversarial method. This experiment estimates how often an adversary would succeed on randomly chosen photos: the world chooses an image randomly, and the adversary attempts to cause it to be misclassified.
- Prefiltered case. To study a more aggressive attack, we performed experiments in which the images are prefiltered. Specifically, we selected 102 images such that all clean images are classified correctly, and all adversarial images (before photo transformation) are classified incorrectly (both top-1 and top-5 classification). In addition we used a confidence threshold for the top prediction: $p(y_{predicted}|X) \ge 0.8$, where $y_{predicted}$ is the class predicted by the network for image $X$. This experiment measures how often an adversary would succeed when the adversary can choose the original image to attack. Under our threat model, the adversary has access to the model parameters and architecture, so the attacker can always run inference to determine whether an attack will succeed in the absence of photo transformation. The attacker might expect to do best by choosing to make attacks that succeed in this initial condition. The victim then takes a new photo of the physical object that the attacker chooses to display, and the photo transformation can either preserve the attack or destroy it.
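As promised in step 3 above, here is a hedged sketch of the crop-and-warp step using OpenCV. The page size, QR layout, destination coordinates, and corner-ordering heuristic are all illustrative assumptions, not the exact geometry of our printouts:

```python
# Hedged sketch of step 3: detect corner QR codes, then warp the page.
import cv2
import numpy as np

def warp_page(photo, page_size=(2400, 3300)):
    ok, _, points, _ = cv2.QRCodeDetector().detectAndDecodeMulti(photo)
    if not ok or points is None or len(points) != 4:
        return None                      # discard photos with missing corners
    # one representative corner per detected QR code
    corners = np.array([p[0] for p in points], dtype=np.float32)
    # rough row-major ordering; a robust version would compare both coordinates
    order = np.argsort(corners[:, 1] * 1e4 + corners[:, 0])
    src = corners[order]
    w, h = page_size
    dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])   # assumed QR positions
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(photo, M, page_size)
```

Once the page is warped, each example occupies known coordinates, so cropping is plain array slicing.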
Results of the photo transformation experiment are summarized in Tables 1, 2 and 3.

Table 1: Accuracy on photos of adversarial images in the average case (randomly chosen images).

Adversarial method | Photos, clean (top-1 / top-5) | Photos, adv. (top-1 / top-5) | Source, clean (top-1 / top-5) | Source, adv. (top-1 / top-5)
fast ε = 16        | 79.8% / 91.9% | 36.4% / 67.7% | 85.3% / 94.1% | 36.3% / 58.8%
fast ε = 8         | 70.6% / 93.1% | 49.0% / 73.5% | 77.5% / 97.1% | 30.4% / 57.8%
fast ε = 4         | 72.5% / 90.2% | 52.9% / 79.4% | 77.5% / 94.1% | 33.3% / 51.0%
fast ε = 2         | 65.7% / 85.9% | 54.5% / 78.8% | 71.6% / 93.1% | 35.3% / 53.9%
iter. basic ε = 16 | 72.9% / 89.6% | 49.0% / 75.0% | 81.4% / 95.1% | 28.4% / 31.4%
iter. basic ε = 8  | 72.5% / 93.1% | 51.0% / 87.3% | 73.5% / 93.1% | 26.5% / 31.4%
iter. basic ε = 4  | 63.7% / 87.3% | 48.0% / 80.4% | 74.5% / 92.2% | 12.7% / 24.5%
iter. basic ε = 2  | 70.7% / 87.9% | 62.6% / 86.9% | 74.5% / 96.1% | 28.4% / 41.2%
l.l. class ε = 16  | 71.1% / 90.0% | 60.0% / 83.3% | 79.4% / 96.1% |  1.0% /  1.0%
l.l. class ε = 8   | 76.5% / 94.1% | 69.6% / 92.2% | 78.4% / 98.0% |  0.0% /  6.9%
l.l. class ε = 4   | 76.8% / 86.9% | 75.8% / 85.9% | 80.4% / 90.2% |  9.8% / 24.5%
l.l. class ε = 2   | 71.6% / 87.3% | 68.6% / 89.2% | 75.5% / 92.2% | 20.6% / 44.1%

Table 2: Accuracy on photos of adversarial images in the prefiltered case (clean image correctly classified, adversarial image confidently incorrectly classified in digital form before being printed and photographed).

Adversarial method | Photos, clean (top-1 / top-5) | Photos, adv. (top-1 / top-5) | Source, clean (top-1 / top-5) | Source, adv. (top-1 / top-5)
fast ε = 16        | 81.8% /  97.0% |  5.1% / 39.4% | 100.0% / 100.0% | 0.0% / 0.0%
fast ε = 8         | 77.1% /  95.8% | 14.6% / 70.8% | 100.0% / 100.0% | 0.0% / 0.0%
fast ε = 4         | 81.4% / 100.0% | 32.4% / 91.2% | 100.0% / 100.0% | 0.0% / 0.0%
fast ε = 2         | 88.9% /  99.0% | 49.5% / 91.9% | 100.0% / 100.0% | 0.0% / 0.0%
iter. basic ε = 16 | 93.3% /  97.8% | 60.0% / 87.8% | 100.0% / 100.0% | 0.0% / 0.0%
iter. basic ε = 8  | 89.2% /  98.0% | 64.7% / 91.2% | 100.0% / 100.0% | 0.0% / 0.0%
iter. basic ε = 4  | 92.2% /  97.1% | 77.5% / 94.1% | 100.0% / 100.0% | 0.0% / 0.0%
iter. basic ε = 2  | 93.9% /  97.0% | 80.8% / 97.0% | 100.0% / 100.0% | 0.0% / 1.0%
l.l. class ε = 16  | 95.8% / 100.0% | 87.5% / 97.9% | 100.0% / 100.0% | 0.0% / 0.0%
l.l. class ε = 8   | 96.0% / 100.0% | 88.9% / 97.0% | 100.0% / 100.0% | 0.0% / 0.0%
l.l. class ε = 4   | 93.9% / 100.0% | 91.9% / 98.0% | 100.0% / 100.0% | 0.0% / 0.0%
l.l. class ε = 2   | 92.2% /  99.0% | 93.1% / 98.0% | 100.0% / 100.0% | 0.0% / 0.0%

Table 3: Adversarial image destruction rate with photos.

Adversarial method | Average case (top-1 / top-5) | Prefiltered case (top-1 / top-5)
fast ε = 16        | 12.5% / 40.0% |  5.1% / 39.4%
fast ε = 8         | 33.3% / 40.0% | 14.6% / 70.8%
fast ε = 4         | 46.7% / 65.9% | 32.4% / 91.2%
fast ε = 2         | 61.1% / 63.2% | 49.5% / 91.9%
iter. basic ε = 16 | 40.4% / 69.4% | 60.0% / 87.8%
iter. basic ε = 8  | 52.1% / 90.5% | 64.7% / 91.2%
iter. basic ε = 4  | 52.4% / 82.6% | 77.5% / 94.1%
iter. basic ε = 2  | 71.7% / 81.5% | 80.8% / 96.9%
l.l. class ε = 16  | 72.2% / 85.1% | 87.5% / 97.9%
l.l. class ε = 8   | 86.3% / 94.6% | 88.9% / 97.0%
l.l. class ε = 4   | 90.3% / 93.9% | 91.9% / 98.0%
l.l. class ε = 2   | 82.1% / 93.9% | 93.1% / 98.0%

We found that "fast" adversarial images are more robust to photo transformation compared to the iterative methods. This could be explained by the fact that iterative methods exploit more subtle kinds of perturbations, and these subtle perturbations are more likely to be destroyed by photo transformation.

One unexpected result is that in some cases the adversarial destruction rate in the "prefiltered case" was higher compared to the "average case". In the case of the iterative methods, even the total success rate was lower for prefiltered images than for randomly selected images. This suggests that, to obtain very high confidence, iterative methods often make subtle co-adaptations that are not able to survive photo transformation.
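For reference, the prefiltering criterion used above can be written as a short predicate. A sketch, assuming `predict_probs` is a hypothetical callable returning the network's class probabilities:

```python
# Sketch of the prefiltered-case selection: clean image correct, adversarial
# image confidently misclassified (both top-1 and top-5).
import numpy as np

def is_prefiltered(x_clean, x_adv, y_true, predict_probs, conf=0.8):
    p_clean, p_adv = predict_probs(x_clean), predict_probs(x_adv)
    top1_clean = int(np.argmax(p_clean))
    top5_adv = np.argsort(p_adv)[-5:]            # indices of the 5 largest probs
    return (top1_clean == y_true                 # clean image classified correctly
            and y_true not in top5_adv           # adversarial wrong, top-1 and top-5
            and p_adv.max() >= conf)             # confident incorrect prediction
```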
Overall, the results show that some fraction of adversarial examples stays misclassified even after a non-trivial transformation: the photo transformation. This demonstrates the possibility of physical adversarial examples. For example, an adversary using the fast method with $\epsilon = 16$ could expect that about 2/3 of the images would be top-1 misclassified and about 1/3 of the images would be top-5 misclassified. Thus, by generating enough adversarial images, the adversary could expect to cause far more misclassification than would occur on natural inputs.

The experiments described above study physical adversarial examples under the assumption that the adversary has full access to the model (i.e. the adversary knows the architecture, model weights, etc.). However, the black box scenario, in which the attacker does not have access to the model, is a more realistic model of many security threats. Because adversarial examples often transfer from one model to another, they may be used for black box attacks (Szegedy et al., 2014; Papernot et al., 2016a). As our own black box attack, we demonstrated that our physical adversarial examples fool a different model than the one that was used to construct them. Specifically, we showed that they fool the open source TensorFlow camera demo², an app for mobile phones which performs image classification on-device. We showed several printed clean and adversarial images to this app and observed a change of classification from the true label to an incorrect label. A video with the demo is available at https://youtu.be/zQ_uMenoBCk. We also demonstrated this effect live at GeekPwn 2016.

2 As of October 25, 2016 the TensorFlow camera demo was available at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android

The transformations applied to images by the process of printing them, photographing them, and cropping them could be considered as some combination of much simpler image transformations. Thus, to better understand what is going on, we conducted a series of experiments to measure the adversarial destruction rate on artificial image transformations. We explored the following set of transformations: change of contrast and brightness, Gaussian blur, Gaussian noise, and JPEG encoding.

For this set of experiments we used a subset of 1,000 images randomly selected from the validation set. This subset of 1,000 images was selected once, and thus all experiments from this section used the same subset of images. We performed experiments for multiple pairs of adversarial method and transformation. For each given pair of transformation and adversarial method we computed adversarial examples, applied the transformation to the adversarial examples, and then computed the destruction rate according to Equation (1).
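As a hedged sketch, the artificial transformations studied here can be implemented as below; the parameter conventions are illustrative, and PIL is an assumed implementation choice rather than our exact experimental code:

```python
# Sketch of the artificial image transformations from this section.
import io
import numpy as np
from PIL import Image, ImageFilter

def transform(img: Image.Image, kind: str, value: float) -> Image.Image:
    arr = np.asarray(img).astype(np.float32)
    if kind == "brightness":                 # add a constant to all pixels
        arr = arr + value
    elif kind == "contrast":                 # scale around the mid-gray level
        arr = (arr - 128.0) * value + 128.0
    elif kind == "noise":                    # i.i.d. Gaussian noise, std = value
        arr = arr + np.random.normal(0.0, value, arr.shape)
    elif kind == "blur":                     # Gaussian blur, sigma = value
        return img.filter(ImageFilter.GaussianBlur(radius=value))
    elif kind == "jpeg":                     # JPEG encode/decode at given quality
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=int(value))
        return Image.open(io.BytesIO(buf.getvalue()))
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```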
Detailed results for the various transformations and adversarial methods with $\epsilon = 16$ can be found in the Appendix in Figure 6. The following general observations can be drawn from these experiments:

- Adversarial examples generated by the fast method are the most robust to transformations, and adversarial examples generated by the iterative least-likely class method are the least robust. This coincides with our results on photo transformation.
- The top-5 destruction rate is typically higher than the top-1 destruction rate. This can be explained by the fact that in order to "destroy" top-5 adversarial examples, a transformation has to push the correct class label into one of the top-5 predictions. However, in order to destroy top-1 adversarial examples, the transformation has to push the correct label to be the top-1 prediction, which is a strictly stronger requirement.
- Changing brightness and contrast does not affect adversarial examples much. The destruction rate on fast and basic iterative adversarial examples is less than 5%, and for the iterative least-likely class method it is less than 20%.
- Blur, noise and JPEG encoding have a higher destruction rate than changes of brightness and contrast. In particular, the destruction rate for the iterative methods could reach 80%-90%. However, none of these transformations destroys 100% of adversarial examples, which coincides with the "photo transformation" experiment.

"}, {"section_index": "3", "section_name": "5 CONCLUSION", "section_text": "In this paper we explored the possibility of creating adversarial examples for machine learning systems which operate in the physical world. We used images taken from a cell-phone camera as input to an Inception v3 image classification neural network. We showed that in such a set-up, a significant fraction of adversarial images crafted using the original network are misclassified even when fed to the classifier through the camera. This finding demonstrates the possibility of adversarial examples for machine learning systems in the physical world. In future work, we expect that it will be possible to demonstrate attacks using other kinds of physical objects besides images printed on paper, attacks against different kinds of machine learning systems (such as sophisticated reinforcement learning agents), attacks performed without access to the model's parameters and architecture (presumably using the transfer property), and physical attacks that achieve a higher success rate by explicitly modeling the physical transformation during the adversarial example construction process. We also hope that future work will develop effective methods for defending against such attacks.

"}, {"section_index": "4", "section_name": "REFERENCES", "section_text": "Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387-402. Springer, 2013.

Nilesh Dalvi, Pedro Domingos, Sumit Sanghai, Deepak Verma, et al. Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 99-108. ACM, 2004.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. CoRR, abs/1412.6572, 2014. URL http://arxiv.org/abs/1412.6572

Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012.

Nicolas Papernot, Patrick Drew McDaniel, Ian J. Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against deep learning systems using adversarial examples. CoRR, abs/1602.02697, 2016a.
URL http://arxiv.org/abs/1602.02697

"}, {"section_index": "5", "section_name": "Appendix", "section_text": "The Appendix contains the following figures:

1. Figure 4, with examples of adversarial images produced by the different adversarial methods.
2. Figure 5, with examples of adversarial images for various values of $\epsilon$.
3. Figure 6, containing plots of adversarial destruction rates for various image transformations.

[Figure 4 panels: Clean image; "Fast", $L_\infty$ distance to clean image = 32; "Basic iter.", $L_\infty$ distance to clean image = 32; "L.l. class", $L_\infty$ distance to clean image = 28.]

Figure 4: Comparison of the different adversarial methods with $\epsilon = 32$. Perturbations generated by iterative methods are finer compared to the fast method. Also, iterative methods do not always select a point on the border of the $\epsilon$-neighbourhood as the adversarial image.

[Figure 5 panels: two image rows, each showing the clean image and adversarial images for $\epsilon$ = 4, 8, 16, 24, 32, 48, 64.]

Figure 5: Comparison of images resulting from an adversarial perturbation using the "fast" method with different sizes of perturbation $\epsilon$. The top image is a "washer" while the bottom one is a "hamster". In both cases the clean images are classified correctly and the adversarial images are misclassified for all considered $\epsilon$.

[Figure 6 panels: (a) Change of brightness; (b) Change of contrast; (c) Gaussian blur; (d) Gaussian noise; (e) JPEG encoding. Each panel plots top-1 and top-5 destruction rates for fast, basic iterative, and least likely class adversarial examples.]

Figure 6: Comparison of adversarial destruction rates for various adversarial methods and types of transformations. All experiments were done with $\epsilon = 16$."}]
rJxdQ3jeg | [{"section_index": "0", "section_name": "END-TO-END OPTIMIZED IMAGE COMPRESSION", "section_text": "Johannes Balle
Center for Neural Science, New York University, New York, NY 10003, USA

Valero Laparra
Image Processing Laboratory, Universitat de Valencia, 46980 Paterna, Spain

* JB and EPS are supported by the Howard Hughes Medical Institute.

"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation. The transforms are constructed in three successive stages of convolutional linear filters and nonlinear activation functions. Unlike most convolutional neural networks, the joint nonlinearity is chosen to implement a form of local gain control, inspired by those used to model biological neurons. Using a variant of stochastic gradient descent, we jointly optimize the entire model for rate-distortion performance over a database of training images, introducing a continuous proxy for the discontinuous loss function arising from the quantizer. Under certain conditions the relaxed loss function may be interpreted as the log likelihood of a generative model, as implemented by a variational autoencoder. Unlike these models, however, the compression model must operate at any given point along the rate-distortion curve, as specified by a trade-off parameter. Across an independent set of test images, we find that the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods. More importantly, we observe a dramatic improvement in visual quality for all images at all bit rates, which is supported by objective quality estimates using MS-SSIM.

"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Data compression is a fundamental and well-studied problem in engineering, and is commonly formulated with the goal of designing codes for a given discrete data ensemble with minimal entropy (Shannon, 1948). The solution relies heavily on knowledge of the probabilistic structure of the data, and thus the problem is closely related to probabilistic source modeling. However, since all practical codes must have finite entropy, continuous-valued data (such as vectors of image pixel intensities) must be quantized to a finite set of discrete values, which introduces error. In this context, known as the lossy compression problem, one must trade off two competing costs: the entropy of the discretized representation (rate) and the error arising from the quantization (distortion). Different compression applications, such as data storage or transmission over limited-capacity channels, demand different rate-distortion trade-offs.

Joint optimization of rate and distortion is difficult. Without further constraints, the general problem of optimal quantization in high-dimensional spaces is intractable (Gersho and Gray, 1992). For this reason, most existing image compression methods operate by linearly transforming the data vector into a suitable continuous-valued representation, quantizing its elements independently, and then encoding the resulting discrete representation using a lossless entropy code (Wintz, 1972; Netravali and Limb, 1980). This scheme is called transform coding due to the central role of the transformation. For example, JPEG uses a discrete cosine transform on blocks of pixels, and JPEG 2000 uses a multi-scale orthogonal wavelet decomposition.
Typically, the three components of transform coding methods - transform, quantizer, and entropy code - are separately optimized (often through manual parameter adjustment).

We have developed a framework for end-to-end optimization of an image compression model based on nonlinear transforms (figure 1). Previously, we demonstrated that a model consisting of linear-nonlinear block transformations, optimized for a measure of perceptual distortion, exhibited visually superior performance compared to a model optimized for mean squared error (MSE) (Balle, Laparra, and Simoncelli, 2016). Here, we optimize for MSE, but use more flexible transforms built from cascades of linear convolutions and nonlinearities. Specifically, we use a generalized divisive normalization (GDN) joint nonlinearity that is inspired by models of neurons in biological visual systems, and has proven effective in Gaussianizing image densities (Balle, Laparra, and Simoncelli, 2015). This cascaded transformation is followed by uniform scalar quantization (i.e., each element is rounded to the nearest integer), which effectively implements a parametric form of vector quantization on the original image space. The compressed image is reconstructed from these quantized values using an approximate parametric nonlinear inverse transform.

For any desired point along the rate-distortion curve, the parameters of both analysis and synthesis transforms are jointly optimized using stochastic gradient descent. To achieve this in the presence of quantization (which produces zero gradients almost everywhere), we use a proxy loss function based on a continuous relaxation of the probability model, replacing the quantization step with additive uniform noise. The relaxed rate-distortion optimization problem bears some resemblance to those used to fit generative image models, and in particular variational autoencoders (Kingma and Welling, 2014; Rezende, Mohamed, and Wierstra, 2014), but differs in the constraints we impose to ensure that it approximates the discrete problem all along the rate-distortion curve. Finally, rather than reporting differential or discrete entropy estimates, we implement an entropy code and report performance using actual bit rates, thus demonstrating the feasibility of our solution as a complete lossy compression method.

Most compression methods are based on orthogonal linear transforms, chosen to reduce correlations in the data, and thus to simplify entropy coding. But the joint statistics of linear filter responses exhibit strong higher-order dependencies. These may be significantly reduced through the use of joint local nonlinear gain control operations (Schwartz and Simoncelli, 2001; Lyu, 2010; Sinz and Bethge, 2013), inspired by models of visual neurons (Heeger, 1992; Carandini and Heeger, 2012). Cascaded versions of such models have been used to capture multiple stages of visual transformation (Simoncelli and Heeger, 1998; Mante, Bonin, and Carandini, 2008).

[Figure 1: block diagram connecting the data space (x, x̂), code space (y, ŷ, q), and perceptual space (z, ẑ) via the transforms g_a, g_s, and g_p, with rate R measured on the quantized code and distortion D in the perceptual space.]

Figure 1: General nonlinear transform coding framework (Balle, Laparra, and Simoncelli, 2016). A vector of image intensities $x \in \mathbb{R}^N$ is mapped to a latent code space via a parametric analysis transform, $y = g_a(x; \phi)$. This representation is quantized, yielding a discrete-valued vector $q \in \mathbb{Z}^M$, which is then compressed. The rate of this discrete code, $R$, is lower-bounded by the entropy of the discrete probability distribution of the quantized vector, $H[P_q]$. To reconstruct the compressed image, the discrete elements of $q$ are reinterpreted as a continuous-valued vector $\hat{y}$, which is transformed back to the data space using a parametric synthesis transform $\hat{x} = g_s(\hat{y}; \theta)$. Distortion is assessed by transforming to a perceptual space using a (fixed) transform, $z = g_p(x)$, and evaluating a metric $d(z, \hat{z})$. We optimize the parameter vectors $\phi$ and $\theta$ for a weighted sum of the rate and distortion measures, $R + \lambda D$, over a set of images.
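For orientation, here is a minimal numerical sketch of the framework in figure 1. The linear maps standing in for $g_a$ and $g_s$, and the crude pooled-histogram entropy estimate, are illustrative assumptions; the actual transforms are learned convolutional GDN/IGDN cascades described below:

```python
# Toy sketch of the transform-coding pipeline of figure 1.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(16, 64)) / 8.0        # toy analysis transform g_a
B = np.linalg.pinv(A)                      # toy synthesis transform g_s

X = rng.normal(size=(64, 1000))            # toy "image" vectors
Q = np.round(A @ X)                        # y = g_a(x); q = round(y)

vals, counts = np.unique(Q, return_counts=True)
p = counts / counts.sum()
# pooled marginal entropy per element times code length: crude stand-in for H[P_q]
rate = -(p * np.log2(p)).sum() * Q.shape[0]
dist = np.mean((X - B @ Q) ** 2)           # MSE distortion of x_hat = g_s(q)
print(f"rate ~ {rate:.1f} bits/vector, distortion ~ {dist:.4f} (MSE)")
```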
Some earlier results suggest that incorporating local normalization in linear block transform coding methods can improve coding performance (Malo et al., 2006), and can improve object recognition performance of cascaded convolutional neural networks (Jarrett et al., 2009). However, the normalization parameters in these cases were not optimized for the task. Here, we make use of a generalized divisive normalization (GDN) transform with optimized parameters, which we have previously shown to be highly efficient in Gaussianizing the local joint statistics of natural images, much more so than cascades of linear transforms followed by pointwise nonlinearities (Balle, Laparra, and Simoncelli, 2015).

Note that some training algorithms for deep convolutional networks incorporate "batch normalization", rescaling the responses of linear filters in the network so as to keep them in a reasonable operating range (Ioffe and Szegedy, 2015). This type of normalization is different from local gain control in that the rescaling factor is identical across all spatial locations. Moreover, once the training is completed, the scaling parameters are typically fixed, which turns the normalization into an affine transformation with respect to the data, unlike GDN, which is spatially adaptive and can be highly nonlinear.

Specifically, our analysis transform $g_a$ consists of three stages of convolution, subsampling, and divisive normalization. We represent the $i$th input channel of the $k$th stage at spatial location $(m, n)$ as $u_i^{(k)}(m, n)$. Each stage begins with an affine convolution:

$$v_i^{(k)}(m, n) = \sum_j \bigl(h_{k,ij} \ast u_j^{(k)}\bigr)(m, n) + c_{k,i}, \tag{1}$$

followed by downsampling:

$$w_i^{(k)}(m, n) = v_i^{(k)}(s_k m,\, s_k n), \tag{2}$$

where $s_k$ is the downsampling factor for stage $k$. Each stage concludes with a GDN operation:

$$u_i^{(k+1)}(m, n) = \frac{w_i^{(k)}(m, n)}{\bigl(\beta_{k,i} + \sum_j \gamma_{k,ij}\,(w_j^{(k)}(m, n))^2\bigr)^{1/2}}. \tag{3}$$

The full set of $h$, $c$, $\beta$, and $\gamma$ parameters (across all three stages) constitute the parameter vector $\phi$ to be optimized.

Analogously, the synthesis transform $g_s$ consists of three stages, with the order of operations reversed within each stage, downsampling replaced by upsampling, and GDN replaced by an approximate inverse we call IGDN. Denoting the $i$th channel at spatial location $(m, n)$ as the input to the $k$th synthesis stage by $\hat{u}_i^{(k)}(m, n)$, each stage begins with the IGDN operation:

$$\hat{w}_i^{(k)}(m, n) = \hat{u}_i^{(k)}(m, n) \cdot \bigl(\hat{\beta}_{k,i} + \sum_j \hat{\gamma}_{k,ij}\,(\hat{u}_j^{(k)}(m, n))^2\bigr)^{1/2}, \tag{4}$$

which is followed by upsampling:

$$\hat{v}_i^{(k)}(m, n) = \begin{cases} \hat{w}_i^{(k)}(m/\hat{s}_k,\, n/\hat{s}_k) & \text{if } m/\hat{s}_k \text{ and } n/\hat{s}_k \text{ are integers,} \\ 0 & \text{otherwise,} \end{cases} \tag{5}$$

where $\hat{s}_k$ is the upsampling factor for stage $k$. Finally, this is followed by an affine convolution:

$$\hat{u}_i^{(k+1)}(m, n) = \sum_j \bigl(\hat{h}_{k,ij} \ast \hat{v}_j^{(k)}\bigr)(m, n) + \hat{c}_{k,i}. \tag{6}$$

Analogously, the set of $\hat{h}$, $\hat{c}$, $\hat{\beta}$, and $\hat{\gamma}$ parameters make up the parameter vector $\theta$. Note that the down-/upsampling operations can be implemented jointly with their adjacent convolution, improving computational efficiency.
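To illustrate equations (1)-(3), the following is a hedged NumPy sketch of a single analysis stage; the filter sizes and parameter values are arbitrary placeholders, not the trained ones:

```python
# Sketch of one analysis stage: affine convolution, downsampling, GDN.
import numpy as np
from scipy.signal import correlate

def analysis_stage(u, h, c, beta, gamma, s):
    # u: (C_in, H, W) input; h: (C_out, C_in, kh, kw) filters; c: (C_out,) biases
    v = np.stack([sum(correlate(u[j], h[i, j], mode="same")
                      for j in range(u.shape[0])) + c[i]
                  for i in range(h.shape[0])])          # affine convolution, eq. (1)
    w = v[:, ::s, ::s]                                   # downsampling, eq. (2)
    denom = np.sqrt(beta[:, None, None] +
                    np.tensordot(gamma, w ** 2, axes=([1], [0])))
    return w / denom                                     # GDN, eq. (3)

rng = np.random.default_rng(0)
u = rng.normal(size=(3, 32, 32))
h = rng.normal(size=(8, 3, 5, 5)) * 0.1
out = analysis_stage(u, h, c=np.zeros(8), beta=np.ones(8),
                     gamma=np.full((8, 8), 0.1), s=2)
print(out.shape)   # (8, 16, 16)
```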
[Figure 2, left panel: rate-distortion plane showing the achievable region and lines $R + \lambda D = \text{const}$; right panel: densities of a code element before quantization, after quantization, and after uniform-noise perturbation.]

Figure 2: Left: The rate-distortion trade-off. The gray region represents the set of all rate-distortion values that can be achieved (over all possible parameter settings). Optimal performance for a given choice of $\lambda$ corresponds to a point on the convex hull of this set with slope $-1/\lambda$. Right: One-dimensional illustration of the relationship between the densities of $y_i$ (elements of code space), $q_i$ (quantized elements), and $\tilde{y}_i$ (elements perturbed by uniform noise). Each discrete probability in $P_{q_i}$ equals the probability mass of the density $p_{y_i}$ within the corresponding quantization bin (indicated by shading). The density $p_{\tilde{y}_i}$ provides a continuous function that interpolates the discrete probability values $P_{q_i}$ at integer positions.

In previous work, we used a perceptual transform $g_p$, separately optimized to mimic human judgements of grayscale image distortions (Laparra et al., 2016), and showed that a set of one-stage transforms optimized for this distortion measure led to visually improved results (Balle, Laparra, and Simoncelli, 2016). Here, we set the perceptual transform $g_p$ to the identity, and use mean squared error (MSE) as the metric (i.e., $d(z, \hat{z}) = \|z - \hat{z}\|^2$). This allows a more interpretable comparison to existing methods, which are generally optimized for MSE, and also allows optimization for color images, for which we do not currently have a reliable perceptual metric.

Our objective is to minimize a weighted sum of the rate and distortion, $R + \lambda D$, over the parameters of the analysis and synthesis transforms and the entropy code, where $\lambda$ governs the trade-off between the two terms (figure 2, left panel). Rather than attempting optimal quantization directly in the image space, which is intractable due to the high dimensionality, we instead assume a fixed uniform scalar quantizer in the code space, and aim to have the nonlinear transformations warp the space in an appropriate way, effectively implementing a parametric form of vector quantization (figure 1). The actual rates achieved by a properly designed entropy code are only slightly larger than the entropy (Rissanen and Langdon, 1981), and thus we define the objective functional directly in terms of entropy:

$$L[g_a, g_s, P_q] = -\mathbb{E}\bigl[\log_2 P_q\bigr] + \lambda\,\mathbb{E}\bigl[d(z, \hat{z})\bigr], \tag{7}$$

where both expectations will be approximated by averages over a training set of images. Given a powerful enough set of transformations, we can assume without loss of generality that the quantization bin size is always one and the representing values are at the centers of the bins. That is,

$$\hat{y}_i = q_i = \mathrm{round}(y_i), \tag{8}$$

where index $i$ runs over all elements of the vectors, including channels and spatial locations. The marginal density of $\hat{y}_i$ is then given by a train of discrete probability masses (Dirac delta functions, figure 2, right panel) with weights equal to the probability mass function of $q_i$:

$$P_{q_i}(n) = \int_{n-\frac{1}{2}}^{n+\frac{1}{2}} p_{y_i}(t)\,\mathrm{d}t, \qquad \text{for all } n \in \mathbb{Z}. \tag{9}$$

Note that both terms in (7) depend on the quantized values, and the derivatives of the quantization function (8) are zero almost everywhere, rendering gradient descent ineffective. To allow optimization via stochastic gradient descent, we replace the quantizer with an additive i.i.d. uniform noise source $\Delta y$, which has the same width as the quantization bins (one). This relaxed formulation has two desirable properties.

[Figure 3: encoder and decoder graphs, with the inference model $\tilde{y}_i \sim \mathcal{U}(y_i, 1)$ where $y = g_a(x; \phi)$, and the generative model $x \sim \mathcal{N}(g_s(\tilde{y}; \theta), (2\lambda)^{-1}\mathbf{1})$.]

Figure 3: Representation of the relaxed rate-distortion optimization problem as the encoder and decoder graphs of a variational autoencoder. Nodes represent random variables, and gray shading indicates observed data; small filled nodes represent parameters; arrows indicate dependency; and nodes within boxes are per-image.
First, the density function of $\tilde{y} = y + \Delta y$ is a continuous relaxation of the probability mass function of $q$ (figure 2, right panel):

$$p_{\tilde{y}}(n) = P_q(n), \qquad \text{for all } n \in \mathbb{Z}^M, \tag{10}$$

which implies that the differential entropy of $\tilde{y}$ can be used as an approximation of the entropy of $q$. Second, independent uniform noise approximates quantization error in terms of its marginal moments, and is frequently used as a model of quantization error (Gray and Neuhoff, 1998). We can thus use the same approximation for our measure of distortion. We examine the empirical quality of these rate and distortion approximations in section 4.

We assume independent marginals in the code space for both the relaxed probability model of $\tilde{y}$ and the entropy code, and model the marginals $p_{\tilde{y}_i}$ non-parametrically to reduce model error. Specifically, we use finely sampled piecewise linear functions which we update similarly to one-dimensional histograms (see appendix). Since $p_{\tilde{y}_i} = p_{y_i} \ast \mathcal{U}(0, 1)$ is effectively smoothed by a box-car filter (the uniform density on the unit interval, $\mathcal{U}(0, 1)$), the model error can be made arbitrarily small by decreasing the sampling interval.

Given this continuous approximation of the quantized coefficient distribution, the loss function for parameters $\theta$ and $\phi$ can be written as:

$$L(\theta, \phi) = \mathbb{E}_{x, \Delta y}\Bigl[-\sum_i \log_2 p_{\tilde{y}_i}\bigl(g_a(x; \phi) + \Delta y;\, \psi^{(i)}\bigr) + \lambda\, d\bigl(g_p\bigl(g_s(g_a(x; \phi) + \Delta y;\, \theta)\bigr),\, g_p(x)\bigr)\Bigr], \tag{11}$$

where vector $\psi^{(i)}$ parameterizes the piecewise linear approximation of $p_{\tilde{y}_i}$ (trained jointly with $\theta$ and $\phi$). This is continuous and differentiable, and thus well-suited for stochastic optimization.

We derived our formulation directly from the classical rate-distortion optimization problem. However, once the transition to a continuous loss function is made, the optimization problem resembles those encountered in fitting generative models of images, and can more specifically be cast in the context of variational autoencoders (Kingma and Welling, 2014; Rezende, Mohamed, and Wierstra, 2014). In Bayesian variational inference, we are given an ensemble of observations of a random variable $x$ along with a generative model $p_{x|\tilde{y}}(x|\tilde{y})$. We seek to find a posterior $p_{\tilde{y}|x}(\tilde{y}|x)$, which generally cannot be expressed in closed form. The approach followed by Kingma and Welling (2014) consists of approximating this posterior with a density $q(\tilde{y}|x)$, by minimizing the Kullback-Leibler divergence between the two:

$$D_{KL}\bigl[q \,\|\, p_{\tilde{y}|x}\bigr] = \mathbb{E}_{\tilde{y} \sim q} \log q(\tilde{y}|x) - \mathbb{E}_{\tilde{y} \sim q} \log p_{x|\tilde{y}}(x|\tilde{y}) - \mathbb{E}_{\tilde{y} \sim q} \log p_{\tilde{y}}(\tilde{y}) + \text{const}. \tag{12}$$

This objective function is equivalent to our relaxed rate-distortion optimization problem, with distortion measured as MSE, if we define the generative model as follows (figure 3):

$$p_{x|\tilde{y}}(x|\tilde{y};\, \lambda, \theta) = \mathcal{N}\bigl(x;\, g_s(\tilde{y}; \theta),\, (2\lambda)^{-1}\mathbf{1}\bigr), \tag{13}$$

$$p_{\tilde{y}}(\tilde{y};\, \psi) = \prod_i p_{\tilde{y}_i}\bigl(\tilde{y}_i;\, \psi^{(i)}\bigr), \tag{14}$$

and the approximate posterior as:

$$q(\tilde{y}|x;\, \phi) = \prod_i \mathcal{U}\bigl(\tilde{y}_i;\, y_i,\, 1\bigr) \quad \text{with } y = g_a(x; \phi), \tag{15}$$

where $\mathcal{U}(\tilde{y}_i;\, y_i, 1)$ is the uniform density on the unit interval centered on $y_i$. With this, the first term in the Kullback-Leibler divergence is constant; the second term corresponds to the distortion, and the third term corresponds to the rate (both up to additive constants). Note that if a perceptual transform $g_p$ is used, or the metric $d$ is not Euclidean, $p_{x|\tilde{y}}$ is no longer Gaussian, and equivalence to variational autoencoders cannot be guaranteed, since the distortion term may not correspond to a normalizable density. For any affine and invertible perceptual transform and any translation-invariant metric, it can be shown to correspond to the density

$$p_{x|\tilde{y}}(x|\tilde{y};\, \lambda, \theta, g_p) = \frac{1}{Z(\lambda)} \exp\Bigl(-\lambda\, d\bigl(g_p(g_s(\tilde{y}; \theta)),\, g_p(x)\bigr)\Bigr), \tag{16}$$

where $Z(\lambda)$ normalizes the density (but need not be computed to fit the model).
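A minimal sketch of the relaxed objective (11), assuming toy linear transforms in place of the learned convolutional ones and a histogram-based (piecewise-constant) density model standing in for the piecewise linear marginals:

```python
# Toy evaluation of the relaxed rate-distortion loss: uniform noise replaces
# quantization; the rate term uses an empirical marginal density per element.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 16)) / 4.0
B = np.linalg.pinv(A)                      # toy g_a, g_s stand-ins
lam = 0.05
X = rng.normal(size=(16, 2000))

Y = A @ X
Y_tilde = Y + rng.uniform(-0.5, 0.5, Y.shape)   # y~ = y + noise (eq. 8 relaxed)
X_hat = B @ Y_tilde                              # relaxed reconstruction
distortion = np.mean(np.sum((X - X_hat) ** 2, axis=0))

rate = 0.0
for yi in Y_tilde:                               # independent marginals (eq. 14)
    hist, edges = np.histogram(yi, bins=64, density=True)
    p = np.maximum(hist[np.clip(np.digitize(yi, edges) - 1, 0, 63)], 1e-12)
    rate += np.mean(-np.log2(p))                 # -E[log2 p(y~_i)]
print(f"relaxed loss: rate {rate:.2f} + lambda*distortion {lam * distortion:.2f}")
```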
Despite the similarity between our nonlinear transform coding framework and that of variational autoencoders, it is worth noting several fundamental differences. First, variational autoencoders are continuous-valued, while digital compression operates in the discrete domain. Comparing differential entropy with (discrete) entropy, or entropy with an actual bit rate, can potentially lead to misleading results. In this paper, we use the continuous domain strictly for optimization, and perform the evaluation on actual bit rates, which allows comparison to existing image coding methods. We assess the quality of the rate and distortion approximations empirically.

Second, generative models aim to minimize the differential entropy of the data ensemble under the model, i.e., to explain fluctuations in the data. This often means minimizing the variance of a "slack" term like (13), which in turn maximizes $\lambda$. Transform coding methods, on the other hand, are optimized to achieve the best trade-off between having the model explain the data (which increases rate and decreases distortion), and having the slack term explain the data (which decreases rate and increases distortion). The overall performance of a compression model is determined by the shape of the convex hull of attainable model distortions and rates, over all possible values of the model parameters. Finding this convex hull is equivalent to optimizing the model for particular values of $\lambda$ (see figure 2). In contrast, generative models operate in a regime where $\lambda$ is inferred and ideally approaches infinity for noiseless data, which corresponds to the regime of lossless compression. Even so, lossless compression methods still need to operate in a discretized space, typically directly on quantized luminance values. For generative models, the discretization of luminance values is usually considered a nuisance (Theis, van den Oord, and Bethge, 2015), although there are examples of generative models that operate on quantized pixel values (van den Oord, Kalchbrenner, and Kavukcuoglu, 2016).

Finally, although the correspondence between the typical slack term (13) of a generative model (figure 3, left panel) and the distortion metric in rate-distortion optimization holds for simple metrics (e.g., Euclidean distance), a more general perceptual measure would be considered a peculiar choice from a generative modeling perspective, if it corresponds to a density at all.
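Before turning to the experiments, a quick numerical check (in the spirit of the verification reported below) of the interpolation property (10) underlying the relaxation: evaluated at an integer, the density of the noise-perturbed element matches the discrete probability. A toy sketch with an assumed Gaussian code element:

```python
# Numerical check that p_y~(n) = P_q(n) for the uniform-noise relaxation.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0.0, 2.0, size=1_000_000)        # a toy code element y_i
q = np.round(y)                                  # quantized values
y_tilde = y + rng.uniform(-0.5, 0.5, y.size)     # noise-relaxed values

n = 1
P_q_n = np.mean(q == n)                          # discrete probability P_q(1)
# kernel-free density estimate of p_y~ at n: mass in a small window / width
p_tilde_n = np.mean(np.abs(y_tilde - n) < 0.05) / 0.1
print(f"P_q({n}) = {P_q_n:.4f}  vs  p_y~({n}) ~ {p_tilde_n:.4f}")   # should agree
```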
We jointly optimized the full set of parameters $\phi$, $\theta$, and all $\psi$ over a subset of the ImageNet database (Deng et al., 2009) consisting of 6507 images, using stochastic gradient descent. This optimization was performed separately for each $\lambda$, yielding separate transforms and marginal probability models for each value.

For the grayscale analysis transform, we used 128 filters (size 9 × 9) in the first stage, each subsampled by a factor of 4 vertically and horizontally. The remaining two stages retain the number of channels, but use filters operating across all input channels (5 × 5 × 128), with outputs subsampled by a factor of 2 in each dimension. The net output thus has half the dimensionality of the input. The synthesis transform is structured analogously. For RGB images, we trained a separate set of models with the first stage augmented to operate across three (color) input channels. For the two largest values of $\lambda$, and for RGB models, we increased the network capacity by increasing the number of channels in each stage to 256 and 192, respectively. Further details about the parameterization of the transforms and their training can be found in the appendix.

We first verified that the continuously-relaxed loss function given in section 3 provides a good approximation to the actual rate-distortion values obtained with quantization (figure 4). The relaxed distortion term appears to be mostly unbiased, and exhibits a relatively small variance. The relaxed (differential) entropy provides a somewhat positively biased estimate of the discrete entropy for the coarser quantization regime, but the bias disappears for finer quantization, as expected. Note that since the values of $\lambda$ do not have any intrinsic meaning, but serve only to map out the convex hull of optimal points in the rate-distortion plane (figure 2, left panel), a constant bias in either of the terms would simply alter the effective value of $\lambda$, with no effect on the compression performance.

[Figure 4: two scatter plots, error due to quantization [MSE] vs. its relaxed estimate, and discrete entropy [bit/px] vs. differential entropy.]

Figure 4: Scatter plots comparing discrete vs. continuously-relaxed values of the two terms of the objective function, evaluated for the optimized GDN model. Points correspond to different values of $\lambda$ between 32 and 2048 (inclusive), for images drawn from a random subset of 2169 images (one third) of the training set. Left: distortion term, evaluated for $g_s(\hat{y})$ vs. $g_s(\tilde{y})$. Right: rate term, $H[P_{q_i}]$ vs. $h[p_{\tilde{y}_i}]$ (summed over $i$).

We compare the rate-distortion performance of our method to two standard methods: JPEG and JPEG 2000. For our method, all images were compressed using uniform quantization (the continuous relaxation using additive noise was used only for training purposes). To make the comparisons more fair, we implemented a simple entropy code based on the context-based adaptive binary arithmetic coding framework (CABAC; Marpe, Schwarz, and Wiegand, 2003). All sideband information needed by the decoder (size of images, value of $\lambda$, etc.) was included in the bit stream (see appendix). Note that although the computational costs for training our models are quite high, encoding or decoding an image with the trained models is efficient, requiring only execution of the optimized analysis transformation and quantizer, or the synthesis transformation, respectively. Evaluations were performed on the Kodak image dataset¹, an uncompressed set of images commonly used to evaluate image compression methods. We also examined a set of relatively standard (if outdated) images used by the compression community (known by the names "Lena", "Barbara", "Peppers", and "Mandrill"), as well as a set of our own digital photographs. None of these test images was included in the training set. All test images, compressed at a variety of bit rates using all three methods, along with their associated rate-distortion curves, are available online at

1 Downloaded from http://www.cipr.rpi.edu/resource/stills/kodak.html

[Figure 5: an example image compressed with the three methods at matched bit rates.]

Figure 5: A heavily compressed example image, 752 × 376 pixels. Note the appearance of artifacts, especially near edges, in both the JPEG and JPEG 2000 images.

JPEG, 4283 bytes (0.121 bit/px), PSNR: luma 24.85 dB / chroma 29.23 dB, MS-SSIM: 0.8079
Proposed method, 3986 bytes (0.113 bit/px), PSNR: luma 27.01 dB / chroma 34.16 dB, MS-SSIM: 0.9039
JPEG 2000, 4004 bytes (0.113 bit/px), PSNR: luma 26.61 dB / chroma 33.88 dB, MS-SSIM: 0.8860

Figure 6: Cropped portion of an image compressed at three different bit rates. Middle row: the proposed method, at three different settings of $\lambda$. Top row: JPEG, with three different quality settings. Bottom row: JPEG 2000, with three different rate settings. Bit rates within each column are matched.
[Figure 7: rate-distortion curves (MS-SSIM and luma PSNR vs. bit rate) for JPEG, JPEG 2000, and the proposed method.]

Figure 7: Rate-distortion curves for the luma component of the image shown in figure 5. Left: perceptual quality, measured with multi-scale structural similarity (MS-SSIM; Wang, Simoncelli, and Bovik (2003)). Right: peak signal-to-noise ratio ($10 \log_{10}(255^2/\mathrm{MSE})$).

Although we used MSE as a distortion metric for training, the appearance of compressed images is both qualitatively different and substantially improved, compared to JPEG and JPEG 2000. As an example, figure 5 shows an image compressed using our method optimized for a low value of $\lambda$ (and thus, a low bit rate), compared to JPEG/JPEG 2000 images compressed at equal or greater bit rates. The image compressed with our method has less detail than the original (not shown, but available online), with fine texture and other patterns often eliminated altogether, but this is accomplished in a way that preserves the smoothness of contours and the sharpness of many of the edges, giving them a natural appearance. By comparison, the JPEG and JPEG 2000 images exhibit artifacts that are common to all linear transform coding methods: since local features (edges, contours, texture elements, etc.) are represented using particular combinations of localized linear basis functions, independent scalar quantization of the transform coefficients causes imbalances in these combinations, and leads to visually disturbing blocking, aliasing, and ringing artifacts that reflect the underlying basis functions.

Remarkably, we find that the perceptual advantages of our method hold for all images tested, and at all bit rates. The progression from high to low bit rates is shown for an example image in figure 6 (additional examples are provided in the appendix and online). As the bit rate is reduced, JPEG and JPEG 2000 degrade their approximation of the original image by coarsening the precision of the coefficients of linear basis functions, thus exposing the visual appearance of those basis functions. On the other hand, our method appears to progressively simplify contours and other image features, effectively concealing the underlying quantization of the representation. Consistent with the appearance of these example images, we find that distortion measured with a perceptual metric (MS-SSIM; Wang, Simoncelli, and Bovik, 2003) indicates substantial improvements across all tested images and bit rates (figure 7; additional examples are provided in the appendix and online). Finally, when quantified with PSNR, we find that our method exhibits better rate-distortion performance than both JPEG and JPEG 2000 for most (but not all) test images, especially at the lower bit rates.

"}, {"section_index": "3", "section_name": "5 DISCUSSION", "section_text": "We have presented a complete image compression method based on nonlinear transform coding, and a framework to optimize it end-to-end for rate-distortion performance. Our compression method offers improvements in rate-distortion performance over JPEG and JPEG 2000 for most images and bit rates. More remarkably, although the method was optimized using mean squared error as a distortion metric, the compressed images are much more natural in appearance than those compressed with JPEG or JPEG 2000, both of which suffer from the severe artifacts commonly seen in linear transform coding methods.
Consistent with this, perceptual quality (as estimated with the MS-SSIM index) exhibits substantial improvement across all test images and bit rates. We believe this visual improvement arises because the cascade of biologically-inspired nonlinear transformations in the model has been optimized to capture the features and attributes of images that are represented in the statistics of the data, parallel to the processes of evolution and development that are believed to have shaped visual representations within the human brain (Simoncelli and Olshausen, 2001). Nevertheless, additional visual improvements might be possible if the method were optimized using a perceptual metric in place of MSE (Balle, Laparra, and Simoncelli, 2016).

For comparison to linear transform coding methods, we can interpret our analysis transform as a single-stage linear transform followed by a complex vector quantizer. As in many other optimized representations, e.g., sparse coding (Lewicki and Olshausen, 1998), as well as many engineered representations, e.g., the steerable pyramid (Simoncelli, Freeman, et al., 1992), curvelets (Candes and Donoho, 2002), and dual-tree complex wavelets (Selesnick, Baraniuk, and Kingsbury, 2005), the filters in this first stage are localized and oriented and the representation is overcomplete. Whereas most transform coding methods use complete (often orthogonal) linear transforms with spatially separable filters, the overcompleteness and orientation tuning of our initial transform may explain the ability of the model to better represent features and contours with continuously varying orientation, position and scale (Simoncelli, Freeman, et al., 1992).

Our work is related to two previous publications that optimize image representations with the goal of image compression. Gregor, Besse, et al. (2016) introduce an interesting hierarchical representation of images, in which degradations are more natural looking than those of linear representations. However, rather than optimizing directly for rate-distortion performance, their modeling is generative. Due to the differences between these approaches (as outlined in section 3.1), their procedure of obtaining coding representations from the generative model (scalar quantization, and elimination of hierarchical levels of the representation) is less systematic than our approach and unlikely to be optimal. Further, no entropy code is provided, and the authors therefore resort to comparing entropy estimates to bit rates of established compression methods, which can be unreliable. The model developed by Toderici et al. (2016) is optimized to provide various rate-distortion trade-offs and directly outputs a binary representation, making it more easily comparable to other image compression methods. Moreover, their formulation has the advantage over ours that a single representation is sought for all rate points. However, it is not clear whether their formulation necessarily leads to rate-distortion optimality (and their empirical results suggest that this is not the case).

We are currently testing models that use simpler rectified-linear or sigmoidal nonlinearities, to determine how much of the performance and visual quality of our results is due to the use of biologically-inspired joint nonlinearities.
Preliminary results indicate that qualitatively similar results are achievable with other activation functions we tested, but that rectified linear units generally require a substantially larger number of model parameters/stages to achieve the same rate-distortion performance as the GDN/IGDN nonlinearities. This suggests that GDN/IGDN transforms are more efficient for compression, producing better models with fewer stages of processing (as we previously found for density estimation; Balle, Laparra, and Simoncelli, 2015), which might be an advantage for deployment of our method, say, in embedded systems. However, such conclusions are based on a somewhat limited set of experiments and should at this point be considered provisional. More generally, GDN represents a multivariate generalization of a particular type of sigmoidal function. As such, the observed efficiency advantage relative to pointwise nonlinearities is expected, and a variant of a universal function approximation theorem (e.g., Leshno et al., 1993) should hold.

The rate-distortion objective can be seen as a particular instantiation of the general unsupervised learning or density estimation problems. Since the transformation to a discrete representation may be viewed as a form of classification, it is worth considering whether our framework offers any insights that might be transferred to more specific supervised learning problems, such as object recognition. For example, the additive noise used in the objective function as a relaxation of quantization might also serve the purpose of making supervised classification networks more robust to small perturbations, and thus allow them to avoid catastrophic "adversarial" failures that have been demonstrated in previous work (Szegedy et al., 2013). In any case, our results provide a strong example of the power of end-to-end optimization in achieving a new solution to a classical problem."}, {"section_index": "4", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Olivier Henaff and Matthias Bethge for fruitful discussions."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Balle, Johannes, Valero Laparra, and Eero P. Simoncelli (2015). "Density Modeling of Images Using a Generalized Normalization Transformation". In: arXiv e-prints. Presented at the 4th Int. Conf. for Learning Representations, 2016. arXiv: 1511.06281.
— (2016). "End-to-end optimization of nonlinear transform codes for perceptual quality". In: arXiv e-prints. Presented at 2016 Picture Coding Symposium. arXiv: 1607.05006.
Candes, Emmanuel J. and David L. Donoho (2002). "New Tight Frames of Curvelets and Optimal Representations of Objects with C2 Singularities". In: Comm. Pure Appl. Math. 57, pp. 219-266.
Carandini, Matteo and David J. Heeger (2012). "Normalization as a canonical neural computation". In: Nature Reviews Neuroscience 13. doi: 10.1038/nrn3136.
Deng, J. et al. (2009). "ImageNet: A Large-Scale Hierarchical Image Database". In: IEEE Conf. on Computer Vision and Pattern Recognition. doi: 10.1109/CVPR.2009.5206848.
Gersho, Allen and Robert M. Gray (1992). Vector Quantization and Signal Compression. Kluwer. ISBN: 978-0-7923-9181-4.
Gray, Robert M. and David L. Neuhoff (1998). "Quantization". In: IEEE Transactions on Information Theory 44.6. doi: 10.1109/18.720541.
Gregor, Karol, Frederic Besse, et al. (2016). "Towards Conceptual Compression". In: arXiv e-prints. arXiv: 1604.08772.
Gregor, Karol and Yann LeCun (2010). "Learning Fast Approximations of Sparse Coding".
In: Proceedings of the 27th International Conference on Machine Learning.
Heeger, David J. (1992). "Normalization of cell responses in cat striate cortex". In: Visual Neuroscience 9.2. doi: 10.1017/S0952523800009640.
Ioffe, Sergey and Christian Szegedy (2015). "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". In: arXiv e-prints. arXiv: 1502.03167.
Jarrett, Kevin et al. (2009). "What is the Best Multi-Stage Architecture for Object Recognition?" In: 2009 IEEE 12th International Conference on Computer Vision. doi: 10.1109/ICCV.2009.5459469.
Kingma, Diederik P. and Jimmy Lei Ba (2014). "Adam: A Method for Stochastic Optimization". In: arXiv e-prints. Presented at the 3rd Int. Conf. for Learning Representations, 2015. arXiv: 1412.6980.
Kingma, Diederik P. and Max Welling (2014). "Auto-Encoding Variational Bayes". In: arXiv e-prints. arXiv: 1312.6114.
Laparra, Valero et al. (2016). "Perceptual image quality assessment using a normalized Laplacian pyramid". In: Proceedings of SPIE, Human Vision and Electronic Imaging XXI.
Leshno, Moshe et al. (1993). "Multilayer Feedforward Networks With a Nonpolynomial Activation Function Can Approximate Any Function". In: Neural Networks 6.6. doi: 10.1016/S0893-6080(05)80131-5.
Lewicki, Michael S. and Bruno Olshausen (1998). "Inferring sparse, overcomplete image codes using an efficient coding framework". In: Advances in Neural Information Processing Systems 10, pp. 815-821.
Lyu, Siwei (2010). "Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform". In: Advances in Neural Information Processing Systems 23, pp. 1522-1530.
Malo, Jesus et al. (2006). "Non-linear image representation for efficient perceptual coding". In: IEEE Transactions on Image Processing 15.1. doi: 10.1109/TIP.2005.860325.
Mante, Valerio, Vincent Bonin, and Matteo Carandini (2008). "Functional Mechanisms Shaping Lateral Geniculate Responses to Artificial and Natural Stimuli". In: Neuron 58.4. doi: 10.1016/j.neuron.2008.03.011.
Marpe, Detlev, Heiko Schwarz, and Thomas Wiegand (2003). "Context-Based Adaptive Binary Arithmetic Coding in the H.264/AVC Video Compression Standard". In: IEEE Transactions on Circuits and Systems for Video Technology 13.7. doi: 10.1109/TCSVT.2003.815173.
Netravali, A. N. and J. O. Limb (1980). "Picture Coding: A Review". In: Proceedings of the IEEE 68.3. doi: 10.1109/PROC.1980.11647.
Oord, Aaron van den, Nal Kalchbrenner, and Koray Kavukcuoglu (2016). "Pixel Recurrent Neural Networks". In: arXiv e-prints. arXiv: 1601.06759.
Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra (2014). "Stochastic Backpropagation and Approximate Inference in Deep Generative Models". In: arXiv e-prints. arXiv: 1401.4082.
Rippel, Oren, Jasper Snoek, and Ryan P. Adams (2015). "Spectral Representations for Convolutional Neural Networks". In: Advances in Neural Information Processing Systems 28, pp. 2449-2457.
Rissanen, Jorma and Glen G. Langdon Jr. (1981). "Universal modeling and coding". In: IEEE Transactions on Information Theory 27.1. doi: 10.1109/TIT.1981.1056282.
Schwartz, Odelia and Eero P. Simoncelli (2001). "Natural signal statistics and sensory gain control". In: Nature Neuroscience 4.8. doi: 10.1038/90526.
Selesnick, Ivan W., Richard G. Baraniuk, and Nick C. Kingsbury (2005). "The Dual-Tree Complex Wavelet Transform". In: IEEE Signal Processing Magazine 22.6. doi: 10.1109/MSP.2005.1550194.
Shannon, Claude E. (1948).
\"A Mathematical Theory of Communication'. In: The Bell System Tech- nical Journal27.3.D01:10.1002/j.1538-7305.1948.tb01338.x Simoncelli, Eero P., William T. Freeman, et al. (1992). \"Shiftable Multiscale Transforms\". In: IEEE Transactions on Information Theory 38.2. D01:10.1109/18.119725 Simoncelli, Eero P. and David J. Heeger (1998). \"A model of neuronal responses in visual area MT\". In: Vision Research 38.5.D01:10.1016/s0042-6989 (97) 00183-1 Simoncelli, Eero P. and Bruno Olshausen (2001). \"Natural image statistics and neural representa- tion\"'. In: Annual Review of Neuroscience 24. D0i: 10. 1146/ annurev . neuro. 24.1. 11 9 3 Sinz, Fabian and Matthias Bethge (2013). \"What Is the Limit of Redundancy Reduction with Divi- sive Normalization?\" In: Neural Computation 25.11. D01: 10.1162/NEC0_a_00505 Szegedy, Christian et al. (2013). \"Intriguing properties of neural networks\". In: arXiv e-prints. arXiv: 1312.6199 Theis, Lucas, Aaron van den Oord, and Matthias Bethge (2015). \"A note on the evaluation of gen- erative models'. In: arXiv e-prints. Presented at the 4th Int. Conf. for Learning Representations. arXiv:1511.01844 Toderici, George et al. (2016). \"Full Resolution Image Compression with Recurrent Neural Net- works\". In: arXiv e-prints. arXiv:1608.05148 Wang, Zhou, Eero P. Simoncelli, and Alan Conrad Bovik (2003). \"Multi-Scale Structural Similarity for Image Quality Assessment\". In: Conf. Rec. of the 37th Asilomar Conf. on Signals, Systems and Computers, 2004. D01:10.1109/ACSSC.2003.1292216 Wintz, Paul A. (1972). \"Transform Picture Coding\". In: Proceedings of the IEEE 60.7. D01: 10 . 1109/PR0C.1972.8780\nAs described in the main text, our analysis transform consists of three stages of convolution, down sampling, and GDN. The number and size of filters, downsampling factors, and connectivity fro. layer to layer are provided in figure|8|for the grayscale transforms. The transforms for RGB images. and for high bit rates differ slightly in that they have an increased number of channels in each stage. These choices are somewhat ad-hoc, and a more thorough exploration of alternative architectures. could potentially lead to significant performance improvements..\nWe have previously shown that GDN is highly efficient in Gaussianizing the local joint statistics. of natural images (Balle, Laparra, and Simoncelli, 2015). Even though Gaussianization is a quite different optimization problem than the rate-distortion objective with the set of constraints defined. above, it is similar in that a marginally independent latent model is assumed in both cases. When. optimizing for Gaussianization, the exponents in the parametric form of GDN control the tail be-. havior of the Gaussianized densities. Since tail behavior is less important here, we chose to simplify. the functional form, fixing the exponents as well as forcing the weight matrix to be symmetric (i.e.,.\nThe synthesis transform is meant to function as an approximate inverse transformation, so we con struct it by applying a principle known from the LISTA algorithm (Gregor and LeCun, 2010) to the. fixed point iteration previously used to invert the GDN transform (Balle, Laparra, and Simoncelli 2015). The approximate inverse consists of one iteration, but with a separate set of parameters from the forward transform, which are constrained in the same way, but trained separately. We refer tc this nonlinear transform as \"inverse GDN\" (IGDN).\nThe full model (analysis and synthesis filters, GDN and IGDN parameters) were optimized, fo. 
The full model (analysis and synthesis filters, GDN and IGDN parameters) was optimized, for each λ, over a subset of the ImageNet database (Deng et al., 2009) consisting of 6507 images. We applied a number of preprocessing steps to the images in order to reduce artifacts and other unwanted contaminations: first, we eliminated images with excessive saturation. We added a small amount of uniform noise, corresponding to the quantization of pixel values, to the remaining images. Finally, we downsampled and cropped the images to a size of 256 × 256 pixels each, where the amount of downsampling and cropping was randomized, but depended on the size of the original image. In order to reduce high-frequency noise and compression artifacts, we only allowed resampling factors less than 0.75, discarding images that were too small to satisfy this constraint.

[Figure 8: block diagram of the analysis (left) and synthesis (right) transforms; layer types, filter supports, channel counts, and per-layer parameter counts are given in the caption below.]

Figure 8: Parameterization of analysis (g_a) and synthesis (g_s) transforms for grayscale images. conv: affine convolution (1)/(6), with filter support (x × y) and number of channels (output × input). down-/upsample: regular down-/upsampling (2)/(5) by given factor (implemented jointly with the adjacent convolution). GDN/IGDN: generalized divisive normalization across channels (3), and its approximate inverse (4); see text. Number of parameters for each layer given at the bottom.

The following details apply to the optimization:

- We used the Adam optimization algorithm (Kingma and Ba, 2014) to obtain values for the parameters φ and θ, starting with α = 10⁻⁴, and subsequently lowering it by a factor of 10 whenever the improvement of both rate and distortion stagnated, until α = 10⁻⁷.
- Linear filters were parameterized using their discrete cosine transform (DCT) coefficients. We found this to be slightly more effective in speeding up the convergence than discrete Fourier transform (DFT) parameterization (Rippel, Snoek, and Adams, 2015).
- We parameterized the GDN parameters in terms of the elementwise relationship β_{k,i} = (β̂_{k,i})² − 2^{−10}. The squaring ensures that gradients are smaller around parameter values close to 0, a regime in which the optimization can otherwise become unstable. To obtain an unambiguous mapping, we projected each β̂_{k,i} onto the interval [2⁻⁵, ∞) after each gradient step (see the sketch after this list). We applied the same treatment to γ̂_{k,j}, and additionally averaged γ̂_{k,i} with its transpose after each step in order to make it symmetric, as explained above. The IGDN parameters were treated in the same way.
- To remove the scaling ambiguity between each linear transform and its following nonlinearity (or preceding nonlinearity, in the case of the synthesis transform), we renormalized the linear filters after each gradient step, dividing each filter by the square root of the sum of its squared coefficients. For the analysis transform, the sum runs over space and all input channels, and for the synthesis transform, over space and all output channels.
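The reparameterization and projection steps above are simple to state in code. The sketch below is ours and purely illustrative, under the stated reading of the (partially garbled) relationship β = β̂² − 2⁻¹⁰ with β̂ projected onto [2⁻⁵, ∞).

```python
import numpy as np

LOWER = 2.0 ** -5  # projection bound for the reparameterized variables

def project(beta_hat):
    """Clip the reparameterized GDN parameters to [2**-5, inf)
    after each gradient step (same treatment for gamma_hat)."""
    return np.maximum(beta_hat, LOWER)

def to_beta(beta_hat):
    """Map reparameterized values back to the GDN parameters.
    The squaring keeps gradients small near zero; subtracting
    2**-10 makes beta = 0 reachable at the projection bound."""
    return beta_hat ** 2 - 2.0 ** -10

def symmetrize(gamma_hat):
    """Average gamma_hat with its transpose after each step."""
    return 0.5 * (gamma_hat + gamma_hat.T)

beta_hat = project(np.array([0.001, 0.5, 2.0]))  # e.g. after a gradient step
print(to_beta(beta_hat))  # smallest entry maps to exactly 0
```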
We represented each of the marginals p_{y_i} as a piecewise linear function (i.e., a linear spline), using 10 sampling points per unit interval. The parameter vector γ^(i) consists of the values of p_{y_i} at these sampling points. We did not use Adam to update γ^(i); rather, we used ordinary stochastic gradient descent to minimize the negative expected likelihood

L(γ^(i)) = −E_y [ log p_{y_i}(y_i; γ^(i)) ],

and renormalized the marginal densities after each step. After every 10 gradient steps, we used a heuristic to adapt the range of the spline approximation to cover the range of values of y_i obtained on the training set.

We implemented an entropy code based on the context-adaptive binary arithmetic coding (CABAC) framework defined by Marpe, Schwarz, and Wiegand (2003). Arithmetic entropy codes are designed to compress discrete-valued data to bit rates closely approaching the entropy of the representation, assuming that the probability model used to design the code describes the data well. The following information was encoded into the bitstream:

- the size of the image (two 16-bit integers, bypassing arithmetic coding),
- whether the image is grayscale or RGB (one bit, bypassing arithmetic coding),
- the value of λ (one 16-bit integer, bypassing arithmetic coding), which provides an index for the parameters of the analysis and synthesis transforms as well as the initial probability models for the entropy codes (these are fixed after optimization, and assumed to be available to encoder and decoder),
- the value of each element of q, iterating over channels, and over space in raster-scan order, using the arithmetic coding engine.

Since CABAC operates on binary values, the quantized values in q need to be converted to binary decisions. We follow a simple scheme inspired by the encoding of H.264/AVC transform coefficients as detailed by Marpe, Schwarz, and Wiegand (2003). For each q_i, we start by testing if the encoded value is equal to the mode of the distribution. If this is the case, the encoding of q_i is completed. If not, another binary decision determines whether it is smaller or larger than the mode. Following that, each possible integer value is tested in turn, which yields a bifurcated chain of decisions as illustrated in figure 9 (and sketched in code below). This process is carried out until either one of the binary decisions determines q_i, or some minimum (q_{i,min}) or maximum (q_{i,max}) value is reached. In case q_i is outside of that range, the difference between it and the range bound is encoded using an exponential Golomb code, bypassing the arithmetic coding engine.

[Figure 9: binary decision tree for one quantized value, starting at the test q_i = q_{i,mode}, branching on q_i > q_{i,mode}, then testing q_{i,mode} ± 1, q_{i,mode} ± 2, ..., down to q_{i,min}/q_{i,max} and the EG fallback.]

Figure 9: Binarization of a quantized value for binary arithmetic coding. Each circle represents a binary decision encoded with its own CABAC context. Arrows pointing left represent "false", arrows pointing right "true". On reaching END, the encoding of the quantized value is completed. On reaching EG fallback, the magnitude of q_i, which falls outside of the range [q_{i,min}, q_{i,max}], is encoded using an exponential Golomb code, bypassing the arithmetic coding engine.

Adaptive codes, such as CABAC, can potentially further improve bit rates, and to some extent correct model error, by adapting the probability model on-line to the statistics of the data. In our code, this is achieved by sharing the marginal probability model P_{q_i} of each element in q across space within each channel. We derived the initial probability models by subsampling the continuous densities p_{y_i} determined during optimization, as in (10). However, note that due to the simple raster-scan ordering, the coding scheme presented above only crudely exploits spatial adaptation of the probability model compared to existing coding methods such as JPEG 2000 and H.264/AVC. Thus, the performance gains compared to a well-designed non-adaptive entropy code are relatively small (figure 10), and likely smaller than those achieved by the entropy code in JPEG 2000, to which we compare.
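The decision sequence of figure 9 can be written out explicitly. The sketch below is our reconstruction of the binarization for a single quantized value, not the actual codec implementation; each emitted boolean would be coded with its own CABAC context.

```python
def binarize(q, q_mode, q_min, q_max):
    """Sequence of binary decisions for one quantized value q, mirroring
    the bifurcated chain in figure 9. If q lies outside [q_min, q_max],
    the remaining magnitude is sent with an exponential Golomb code."""
    decisions = []
    if q == q_mode:
        decisions.append(True)           # "is it the mode?" -> yes, done
        return decisions, None
    decisions.append(False)
    larger = q > q_mode
    decisions.append(larger)             # smaller or larger than the mode?
    step = 1 if larger else -1
    bound = q_max if larger else q_min
    v = q_mode + step
    while v != q + step and v != bound + step:
        decisions.append(v == q)         # test each integer value in turn
        v += step
    if q * step > bound * step:          # outside the range: EG fallback
        return decisions, ("exp-golomb", abs(q - bound))
    return decisions, None

print(binarize(3, q_mode=0, q_min=-8, q_max=8))
# -> ([False, True, False, False, True], None)
print(binarize(12, q_mode=0, q_min=-8, q_max=8))
# -> ten False/True decisions, then ('exp-golomb', 4)
```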
[Figure 10: PSNR vs. bit rate for the proposed method with adaptive vs. non-adaptive entropy coding.]

Figure 10: Rate-distortion comparison of adaptive vs. non-adaptive entropy coding, averaged (for each value of λ) over the 24 images in the Kodak test set. The non-adaptive entropy code is simulated by computing the entropy of q assuming the probability model determined during optimization (which is also used to initialize the adaptive code).

[Figure 11: mean squared error vs. bit rate for JPEG, JPEG 2000 (target rate), JPEG 2000 (target quality), and the proposed method; translucent lines connect each average point to the 24 per-image R-D values.]

Figure 11: Summary rate-distortion curves, computed by averaging results over the 24 images in the Kodak test set. Each point is connected by translucent lines to the set of 24 points corresponding to the individual image R-D values from which it was derived. JPEG results are averaged over images compressed with identical quality settings. Results of the proposed method are averaged over images compressed with identical λ values (and thus, computed with exactly the same forward and inverse transforms). The two JPEG 2000 curves are computed with the same implementation, by averaging over images compressed with the same target rate or the same target quality. Note that these two methods of selecting points to be averaged lead to significantly different average results.

Although it is desirable to summarize and compare the rate-distortion behavior of JPEG, JPEG 2000, and our method across an image set, it is difficult to do this in a way that is fair and interpretable. First, rate-distortion behavior varies substantially across bit rates for different images. For example, for the image in figure 12, our method achieves the same MSE with roughly 50% of the bits needed by JPEG 2000 for low rates, and about 30% for high rates. For the image in figure 17, the gains are more modest, although still significant through the range. But for the image in figure 15, our method only slightly outperforms JPEG 2000 at low rates, and under-performs it at high rates. Note that these behaviors are again different for MS-SSIM, which shows a significant improvement for all images and bit rates (consistent with their visual appearance).

Second, there is no obvious or agreed-upon method for combining rate-distortion curves across images. More specifically, one must decide which points in the curves to combine. For our method, it is natural to average the MSE and entropy values across images compressed using the same choice of λ, since these are all coded and decoded using exactly the same representation and quantization scheme. For JPEG, it seems natural to average over images coded at the same "quality" setting, which appear to be coded using the same quantization choices. The OpenJPEG implementation of JPEG 2000 we use allows selection of points on the rate-distortion curve either through specification of a target bit rate, or a target quality. This choice has no effect on rate-distortion plots for individual images (verified, but not shown), but has a substantial effect when averaging over images, since the two choices lead one to average over a different set of R-D points.
This is illustrated in figure 11. Even if points were selected in exactly the same fashion for each of the methods (say, matched to a given set of target rates), summary plots can still over- or underemphasize high rate vs. low rate performance.

We conclude that summaries of rate-distortion are of limited use. Instead, we encourage the reader to browse our extensive collection of test images, with individual rate-distortion plots for each image, available at http://www.cns.nyu.edu/~lcv/iclr2017 in both grayscale and RGB.

In the following pages, we show additional example images, compressed at relatively low bit rates, in order to visualize the qualitative nature of compression artifacts. On each page, the JPEG 2000 image is selected to have the lowest possible bit rate that is equal or greater than the bit rate of the proposed method. In all experiments, we compare to JPEG with 4:2:0 chroma subsampling, and the OpenJPEG implementation of JPEG 2000 with the default "multiple component transform". For evaluating PSNR, we use the JPEG-defined conversion matrix to convert between RGB and Y'CbCr. For evaluating MS-SSIM (Wang, Simoncelli, and Bovik, 2003), we used only the resulting luma component. Original images are not shown, but are available online, along with compressed images at a variety of other bit rates, at http://www.cns.nyu.edu/~lcv/iclr2017.

[Per-image rate-distortion plots (MS-SSIM and PSNR of the luma component vs. bit rate, for JPEG, JPEG 2000, and the proposed method) accompany each of the following examples.]

Proposed method, 3749 bytes (0.106 bit/px), PSNR: luma 32.43 dB/chroma 34.00 dB, MS-SSIM: 0.9767
JPEG 2000, 3769 bytes (0.107 bit/px), PSNR: luma 29.49 dB/chroma 32.99 dB, MS-SSIM: 0.9520
Figure 12: RGB example, from our personal collection, downsampled and cropped to 752 × 376 pixels.

Proposed method, 2978 bytes (0.084 bit/px), PSNR: luma 22.93 dB/chroma 31.45 dB, MS-SSIM: 0.8326
JPEG 2000, 2980 bytes (0.084 bit/px), PSNR: luma 22.53 dB/chroma 31.09 dB, MS-SSIM: 0.8225
Figure 13: RGB example, from our personal collection, downsampled and cropped to 752 × 376 pixels.

Proposed method, 6680 bytes (0.189 bit/px), PSNR: luma 29.31 dB/chroma 36.17 dB, MS-SSIM: 0.9695
JPEG 2000, 6691 bytes (0.189 bit/px), PSNR: luma 28.45 dB/chroma 35.32 dB, MS-SSIM: 0.9586
Figure 14: RGB example, from our personal collection, downsampled and cropped to 752 × 376 pixels.

Proposed method, 5908 bytes (0.167 bit/px), PSNR: luma 23.38 dB/chroma 31.86 dB, MS-SSIM: 0.9219
JPEG 2000, 5908 bytes (0.167 bit/px), PSNR: luma 23.24 dB/chroma 31.04 dB, MS-SSIM: 0.8803
Figure 15: RGB example, from our personal collection, downsampled and cropped to 752 × 376 pixels.
Proposed method, 5683 bytes (0.161 bit/px), PSNR: luma 27.78 dB/chroma 32.60 dB, MS-SSIM: 0.9590
JPEG 2000, 5724 bytes (0.162 bit/px), PSNR: luma 25.36 dB/chroma 31.20 dB, MS-SSIM: 0.9202
Figure 16: RGB example, from our personal collection, downsampled and cropped to 752 × 376 pixels.

Proposed method, 6021 bytes (0.170 bit/px), PSNR: 24.12 dB, MS-SSIM: 0.9292
JPEG 2000, 6037 bytes (0.171 bit/px), PSNR: 23.47 dB, MS-SSIM: 0.9036
Figure 17: Grayscale example, from our personal collection, downsampled and cropped to 752 × 376 pixels.

Proposed method, 4544 bytes (0.129 bit/px), PSNR: 31.01 dB, MS-SSIM: 0.9644
JPEG 2000, 4554 bytes (0.129 bit/px), PSNR: 30.17 dB, MS-SSIM: 0.9546
Figure 18: Grayscale example, from our personal collection, downsampled and cropped to 752 × 376 pixels.

Proposed method, 3875 bytes (0.110 bit/px), PSNR: 31.75 dB, MS-SSIM: 0.9577
JPEG 2000, 3877 bytes (0.110 bit/px), PSNR: 31.24 dB, MS-SSIM: 0.9511
Figure 19: Grayscale example, from our personal collection, downsampled and cropped to 752 × 376 pixels.

Proposed method, 6633 bytes (0.188 bit/px), PSNR: 28.83 dB, MS-SSIM: 0.9681
JPEG 2000, 6691 bytes (0.189 bit/px), PSNR: 28.83 dB, MS-SSIM: 0.9651
Figure 20: Grayscale example, from our personal collection, downsampled and cropped to 752 × 376 pixels.

Proposed method, 10130 bytes (0.287 bit/px), PSNR: 25.27 dB, MS-SSIM: 0.9537
JPEG 2000, 10197 bytes (0.289 bit/px), PSNR: 24.41 dB, MS-SSIM: 0.9320
Figure 21: Grayscale example, from the Kodak test set, downsampled and cropped to 752 × 376 pixels."}]
BydARw9ex | [{"section_index": "0", "section_name": "CAPACITY AND TRAINABILITY IN RECURRENT NEURAL NETWORKS", "section_text": "Jasmine Collins* Jascha Sohl-Dickstein & David Sussillo"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Research and application of recurrent neural networks (RNNs) have seen explosive growth over the last few years (Martens & Sutskever, 2011; Graves et al., 2009), and RNNs have become the central component for some very successful model classes and application domains in deep learning (speech recognition (Amodei et al., 2015), seq2seq (Sutskever et al., 2014), neural machine translation (Bahdanau et al., 2014), the DRAW model (Gregor et al., 2015), educational applications (Piech et al., 2015), and scientific discovery (Mante et al., 2013)). Despite these recent successes, it is widely acknowledged that designing and training the RNN components in complex models can be extremely tricky. Painfully acquired RNN expertise is still crucial to the success of most projects.

One of the main strategies involved in the deployment of RNN models is the use of the Long Short Term Memory (LSTM) networks (Hochreiter & Schmidhuber, 1997), and more recently the Gated Recurrent Unit (GRU) proposed by Cho et al. (2014); Chung et al. (2014) (we refer to these as gated architectures). The resulting models are perceived as being more easily trained, and achieving lower error. While it is widely appreciated that RNNs are universal approximators (Doya, 1993), an unresolved question is the degree to which gated models are more computationally powerful in practice, as opposed to simply being easier to train.

Here we provide evidence that the observed superiority of gated models over vanilla RNN models is almost exclusively driven by trainability. First we describe two types of capacity bottlenecks that various RNN architectures might be expected to suffer from: parameter efficiency related to learning the task, and the ability to remember input history. Next, we describe our experimental setup where we disentangle the effects of these two bottlenecks, including training with extremely thorough hyperparameter (HP) optimization.
Finally, we describe our capacity experiment results"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "(per-parameter and per-unit), as well as the results of trainability experiments (training on extremely hard tasks where gated models might reasonably be expected to perform better)."}, {"section_index": "3", "section_name": "1.1 CAPACITY BOTTLENECKS", "section_text": "There are several potential bottlenecks for RNNs, for example: How much information about the tas. can they store in their parameters? How much information about the input history can they store i. their units? These first two bottlenecks can both be seen as memory capacities (one for the task, one. for the inputs), for different types of memory..\nAnother, different kind of capacity stems from the set of computational primitives an RNN is abl to perform. For example, maybe one wants to multiply two numbers. In terms of number o. units and time steps, this task may be very straight-forward using some specific computationa primitives and dynamics, but with others it may be extremely resource heavy. One might expec that differences in computational capacity due to different computational primitives would play a. large role in performance. However, despite the fact that the gated architectures are outfitted with multiplicative primitive between hidden units, while the vanilla RNN is not, we found no evidence o a computational bottleneck in our experiments. We therefore will focus only on the per-paramete. capacity of an RNN to learn about its task during training, and on the per-unit memory capacity of ar. RNN to remember its inputs"}, {"section_index": "4", "section_name": "1.2 EXPERIMENTAL SETUP", "section_text": "RNNs have many HPs, such as the scalings of matrices and biases, and the functional form of certain nonlinearities. There are additionally many HPs involved in training, such as the choice of optimizer, and the learning rate schedule. In order to train our models we employed a HP tuner that uses a Gaussian Process model similar to Spearmint (see Appendix, section on HP tuning and|Desautels et al.(2014); Snoek et al.[(2012) for related work). The basic idea is that one requests HP values from the tuner, runs the optimization to completion using those values, and then returns the validation loss This loss is then used by the tuner, in combination with previously reported losses, to choose new HP values such that over many experiments, the validation loss is minimized with respect to the HPs. For our experiments, we report the evaluation loss (separate from the validation loss returned to the HP optimizer, except where otherwise noted) after the HP tuner has highly optimized the task (hundreds to many thousands of experiments for each architecture and task).\nIn our studies we used a variety of well-known RNN architectures: standard RNNs such as the vanilla. RNN and the newer IRNN (Le et al.]2015), as well as gated RNN architectures such as the GRU and LSTM. We rounded out our set of models by innovating two novel (to our knowledge) RNN. architectures (see Section[1.4) we call the Update Gate RNN (UGRNN), and the Intersection RNN (+RNN). The UGRNN is a minimally gated' RNN architecture that has only a coupled gate between the recurrent hidden state, and the update to the hidden state. The +RNN uses coupled gates to gate. 
In our studies we used a variety of well-known RNN architectures: standard RNNs such as the vanilla RNN and the newer IRNN (Le et al., 2015), as well as gated RNN architectures such as the GRU and LSTM. We rounded out our set of models by innovating two novel (to our knowledge) RNN architectures (see Section 1.4) we call the Update Gate RNN (UGRNN), and the Intersection RNN (+RNN). The UGRNN is a 'minimally gated' RNN architecture that has only a coupled gate between the recurrent hidden state and the update to the hidden state. The +RNN uses coupled gates to gate both the recurrent and depth dimensions in a straightforward way.

For each of our 6 tasks, 6 RNN variants, 4 depths, and 6+ model sizes, we ran the HP tuner in order to optimize the relevant loss function. Typically this resulted in many hundreds to several thousands of HP evaluations, each of which was a full training run up to millions of training steps. Taken together, this amounted to CPU-millennia worth of computation.

To further explore the various strengths and weaknesses of each RNN architecture, we also used a variety of network depths: 1, 2, 4, 8, in our experiments¹. In most experiments, we held the number of parameters fixed across different architectures and different depths. More precisely, for a given experiment, a maximum number of parameters was set, along with an input and output dimension. The number of hidden units per layer was then chosen such that the number of parameters, summed across all layers of the network, was as large as possible without exceeding the allowed maximum.

¹Not all experiments used a depth of 8, due to limits on computational resources.

Previous work has studied the VC dimension of RNNs, which provides an upper bound on their task-capacity (defined in Section 2.1). These upper bounds are not a close match to our experimental results. For instance, we find that performance saturates rapidly in terms of the number of unrolling steps (Figure 2b), while the relevant bound increases linearly with the number of unrolling steps. "Unrolling" refers to recurrent computation through time.

Empirically, Karpathy et al. (2015) have studied how LSTMs encode information in character-based text modeling tasks. Further, Sussillo & Barak (2013) have reverse-engineered the vanilla RNN trained on simple tasks, using the tools and language of nonlinear dynamical systems theory. In Foerster et al. (2016) the behavior of switched affine recurrent networks is carefully examined.

The ability of RNNs to store information about their input has been better studied, in both the context of machine learning and theoretical neuroscience. Previous work on short term memory traces explores the tradeoffs between memory fidelity and duration, for the case that a new input is presented to the RNN at every time step (Jaeger & Haas, 2004; Maass et al., 2002; White et al., 2004; Ganguli et al., 2008; Charles et al., 2014). We use a simpler capacity measure consisting only of the ability of an RNN to store a single input vector. Our results suggest that, contrary to common belief, the capacity of RNNs to remember their input history is not a practical limiting factor on their performance.

The precise details of what makes an RNN architecture perform well is an extremely active research field (e.g. Jozefowicz et al. (2015)). A highly related article is Greff et al. (2015), in which the authors used random search of HPs, along with systematic removal of pieces of the LSTM architecture, to determine which pieces of the LSTM were more important than the others. Our UGRNN architecture is directly inspired by the large impact of removing the forget gate from the LSTM (Gers et al., 1999). Zhou et al. (2016) introduced an architecture with minimal gating that is similar to the UGRNN, but is directly inspired by the GRU. An in-depth comparison between RNNs and GRUs in the context of end-to-end speech recognition and a limited computational budget was conducted in Amodei et al. (2015). Further, ideas from RNN architectures that improve ease of training, such as forget
gates (Gers et al., 1999), and copying recurrent state from one time step to another, are making their way into deep feed-forward networks as highway networks (Srivastava et al., 2015) and residual connections (He et al., 2015), respectively. Indeed, the +RNN was inspired in part by the coupled depth gate of Srivastava et al. (2015)."}, {"section_index": "5", "section_name": "1.4 RECURRENT NEURAL NETWORK ARCHITECTURES", "section_text": "Below we briefly define the RNN architectures used in this study. Unless otherwise stated, W denotes a matrix and b denotes a vector of biases. The symbol xt is the input at time t, and ht is the hidden state at time t. Remaining vector variables represent intermediate values. The function σ(·) denotes the logistic sigmoid function and s(·) is either tanh or ReLU, set as a HP (see Appendix, Section RNN HPs, for the complete list of HPs). Initial conditions for the networks were set to a learned bias. Finally, it is a well-known trick of the trade to initialize the gates of an LSTM or GRU with a large bias to induce better gradient flow. We included this parameter, denoted as bfg, and tuned it along with all other HPs.

ht = s(Whh ht-1 + Whx xt + bh)    (1)

Note the IRNN is identical in structure to the vanilla RNN, but with an identity initialization for Whh, zero initialization for the biases, and s = ReLU only."}, {"section_index": "6", "section_name": "UGRNN - UPDATE GATE RNN", "section_text": "Based on Greff et al. (2015), where they noticed the forget gate \"was crucial\" to LSTM performance, we tried an RNN variant where we began with a vanilla RNN and added a single gate. This gate determines whether the hidden state is carried over from the previous time step, or updated - hence, it is an update gate. An alternative way to view the UGRNN is as a highway layer gated through time.

ct = s(Wch ht-1 + Wcx xt + bc)    (2)
gt = σ(Wgh ht-1 + Wgx xt + bg + bfg)    (3)
ht = gt · ht-1 + (1 − gt) · ct    (4)

GATED RECURRENT UNIT (CHO ET AL. 2014)

rt = σ(Wrh ht-1 + Wrx xt + br)    (5)
ut = σ(Wuh ht-1 + Wux xt + bu + bfg)    (6)
ct = s(Wch (rt · ht-1) + Wcx xt + bc)    (7)
ht = ut · ht-1 + (1 − ut) · ct    (8)

LONG SHORT TERM MEMORY (HOCHREITER & SCHMIDHUBER 1997)

it = σ(Wih ht-1 + Wix xt + bi)    (9)
ft = σ(Wfh ht-1 + Wfx xt + bf + bfg)    (10)
cin_t = s(Wch ht-1 + Wcx xt + bc)    (11)
ct = ft · ct-1 + it · cin_t    (12)
ot = σ(Woh ht-1 + Wox xt + bo)    (13)
ht = ot · tanh(ct)    (14)

+RNN - INTERSECTION RNN

Due to the success of the UGRNN for shallower architectures in this study (see later figures on trainability), as well as some of the observed trainability problems for both the LSTM and GRU for deeper architectures (e.g. Figure 4h), we developed the Intersection RNN (denoted with a '+') architecture, with a coupled depth gate in addition to a coupled recurrent gate. Additional influences for this architecture were the recurrent gating of the LSTM and GRU, and the depth gating from the highway network (Srivastava et al., 2015). This architecture has recurrent input ht-1 and depth input xt. It also has recurrent output ht and depth output yt. Note that this architecture only applies between layers where xt and yt have the same dimension, and is not appropriate for networks with a depth of 1 (we exclude depth one +RNNs in our experiments).

yin_t = s1(Wyh ht-1 + Wyx xt + by)
gy_t = σ(Wgyh ht-1 + Wgyx xt + bgy + bfg)
yt = gy_t · xt + (1 − gy_t) · yin_t
hin_t = s2(Whh ht-1 + Whx xt + bh)
gh_t = σ(Wghh ht-1 + Wghx xt + bgh + bfg)
ht = gh_t · ht-1 + (1 − gh_t) · hin_t

In practice we used ReLU for s1 and tanh for s2.
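As a concrete reading of equations (2)-(4), here is a minimal NumPy sketch of one UGRNN step; the shapes, initialization, and driving inputs are toy choices of ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ugrnn_step(h_prev, x, W_ch, W_cx, b_c, W_gh, W_gx, b_g, b_fg, s=np.tanh):
    """One UGRNN step, eqs. (2)-(4): a vanilla-RNN candidate state plus a
    single update gate that interpolates it with the previous state."""
    c = s(W_ch @ h_prev + W_cx @ x + b_c)                # (2)
    g = sigmoid(W_gh @ h_prev + W_gx @ x + b_g + b_fg)   # (3)
    return g * h_prev + (1.0 - g) * c                    # (4)

# Toy dimensions: 3-dim input, 5 hidden units.
rng = np.random.default_rng(0)
nh, nx = 5, 3
params = [rng.normal(scale=0.1, size=shape) for shape in
          [(nh, nh), (nh, nx), (nh,), (nh, nh), (nh, nx), (nh,)]]
h = np.zeros(nh)
for t in range(4):                                       # unroll a few steps
    h = ugrnn_step(h, rng.normal(size=nx), *params, b_fg=1.0)
print(h)
```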
A foundational result in machine learning is that a single-layer perceptron with N² parameters can store at least 2 bits of information per parameter (Cover, 1965; Gardner, 1988; Baldi & Venkatesh, 1987). More precisely, a perceptron can implement a mapping from 2N, N-dimensional, input vectors to arbitrary N-dimensional binary output vectors, subject only to the extremely weak restriction that the input vectors be in general position. RNNs provide a far more complex input-output mapping, with hidden units, recurrent dynamics, and a diversity of nonlinearities. Nonetheless, we wondered if there were analogous capacity results for RNNs that we might be able to observe empirically.

As we will show in Section 3, tasks with complex temporal dynamics, such as language modeling, exhibit a per-parameter capacity bottleneck that explains the performance of RNNs far better than a per-unit bottleneck. To make the experimental design as simple as possible, and to remove potential confounds stemming from the choice of temporal dynamics, we study per-parameter capacity using a task inspired by Gardner (1988). Specifically, to measure how much task-related information can be stored in the parameters of an RNN, we use a memorization task, where a random static input is injected into an RNN, and a random static output is read out some number of time steps later. We emphasize that the same per-parameter bottleneck that we find in this simplified task also arises in more temporally complex tasks, such as language modeling.

At a high level, we draw a fixed set of random inputs and random labels, and train the RNN to map random inputs to randomly chosen labels via cross-entropy error. However, rather than returning the cross-entropy error to the HP tuner (as is normally done), we instead return the mutual information between the RNN outputs and the true labels. In this way, we can treat the number of input-output mappings as a HP, and the tuner will select for us the correct number of mappings so as to maximize the mutual information between the RNN outputs and the labels. From this mutual information we compute bits per parameter, which provides a normalized measurement of how much the RNN learned about the task.

More precisely, we draw datasets of binary inputs X and target binary labels Y uniformly at random from the set of all binary datasets, X ~ 𝒳 = {0, 1}^(nin × b), Y ~ 𝒴 = {0, 1}^(1 × b), where b is the number of samples and nin is the dimensionality of the inputs; a sketch of this dataset draw is given below. The number of samples, b, is treated as a HP, and in practice the optimal dataset size is very close to the number of bits of mutual information between true and predicted labels.
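A sketch (ours) of the dataset draw just described: b random binary input columns paired with random binary labels, with b left as a tunable quantity.

```python
import numpy as np

def draw_capacity_dataset(n_in, b, seed=0):
    """Draw a random binary dataset: inputs X in {0,1}^(n_in x b) and
    labels Y in {0,1}^(1 x b), uniformly at random. The RNN is trained
    to map column X[:, i] to label Y[0, i]; b is treated as an HP."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_in, b))
    Y = rng.integers(0, 2, size=(1, b))
    return X, Y

X, Y = draw_capacity_dataset(n_in=16, b=1000)
print(X.shape, Y.shape)  # (16, 1000) (1, 1000)
```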
This trend is demonstrated in Figure App.1 in the Appendix. For each value of b, the RNN is trained to minimize the cross entropy of the network output with the true labels. We write the output of the RNN for all inputs as Ŷ = f(X), with corresponding random variable Ŷ. We are interested in the mutual information I(Ŷ; Y) between the true class labels and the class labels predicted by the RNN. This is the amount of (directly recoverable) information that the RNN has stored about the task. In this setting, it is calculated as

I(Ŷ; Y) = H(Y) − H(Y | Ŷ) = b + b (p log2 p + (1 − p) log2(1 − p)),

where p is the fraction of correctly classified samples. The number b is then adjusted, along with all other HPs, so as to maximize this mutual information, as computed in the sketch below.
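Given the empirical accuracy p, the mutual information above is a one-line computation; a minimal sketch (ours):

```python
import numpy as np

def stored_bits(b, p):
    """I(Yhat; Y) = b + b*(p*log2(p) + (1-p)*log2(1-p)) for a dataset of
    b uniformly random binary labels and classification accuracy p.
    p = 1 recovers b bits; p = 0.5 recovers 0 bits."""
    eps = 1e-12                      # avoid log2(0) at p in {0, 1}
    p = min(max(p, eps), 1 - eps)
    return b + b * (p * np.log2(p) + (1 - p) * np.log2(1 - p))

print(round(stored_bits(b=1000, p=1.0), 1))   # 1000.0 bits
print(round(stored_bits(b=1000, p=0.5), 1))   # 0.0 bits
# Bits per parameter is then stored_bits(b, p) / num_parameters.
```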
"}, {"section_index": "7", "section_name": "2.1.2 RESULTS", "section_text": "Five Bits per Parameter. Examining the results of Figure 1, we find the capacity of all architectures is roughly linear in the number of parameters, across several orders of magnitude of parameter count. We further find that the capacity is between 3 and 6 bits per parameter, once again across all architectures, depths 1, 2 and 4, and across several orders of magnitude in terms of number of parameters. Given the possibility of small size effects, and a larger portion of weights used as biases at a small number of parameters, we believe our estimates for larger networks are more reliable. This leads us to a bits per parameter estimate of approximately 5, averaging over all architectures and all depths. Finally, we note that the per-parameter task capacity increases as a function of the number of unrollings, though with diminishing gains (Figure 2b).

The finding that our results are consistent across diverse architectures and scales is even more surprising, since prior to these experiments it was not clear that capacity would even scale linearly with the number of parameters. For instance, previous results on model compression - by reducing the number of parameters (Yang et al., 2015), or by reducing the bit depth of parameters (Hubara et al., 2016) - might lead one to predict that different architectures use parameters with vastly different efficiencies, and that task capacity increases only sublinearly with parameter count.

Gating Slightly Reduces Capacity. While overall the different architectures performed very similarly, there are some capacity differences between architectures that appear to hold up across most depths and parameter counts. To quantify these differences we constructed a table showing the change in the number of parameters one would need to switch from one architecture to another, while maintaining equivalent capacity (Figure 1i). One trend that emerged from our capacity experiments is a slightly reduced capacity as a function of \"gatedness\". Putting aside the IRNN, which performed the worst and is discussed below, we noticed that across all depths and all model sizes, the performance was on average RNN > UGRNN > GRU > LSTM > +RNN. The vanilla RNN has no gates, the UGRNN has one, while the remaining three have two or more.

[Figure 1, panels (a)-(h): log-log plots of stored bits (a-d) and bits per parameter (e-h) vs. number of parameters, for all depths together and for depths 1, 2, and 4; curves: rnn, ugrnn, gru, lstm, +rnn. Panel (i), equivalent-capacity parameter multipliers:

        rnn    ugrnn  gru    lstm   +rnn
rnn     1.000  1.037  1.114  1.140  1.165
ugrnn   0.965  1.000  1.075  1.099  1.124
gru     0.898  0.931  1.000  1.023  1.046
lstm    0.877  0.910  0.977  1.000  1.023
+rnn    0.858  0.890  0.956  0.978  1.000 ]

Figure 1: All neural network architectures can store approximately five bits per parameter about a task, with only small variations across architectures. (a) Stored bits as a function of network size. These numbers represent the maximum stored bits across 1000+ HP optimizations, with 5 time steps unrolled, at each network size, for all levels of depth. (b-d) Same as (a), but each level of depth shown separately. (e-h) Same as (a-d) but showing bits per parameter as a function of network size. (i) The value in cell (x, y) is the multiplier for the number of parameters needed to give the architecture on the x-axis the same capacity as the architecture on the y-axis. Capacities are measured by averaging the maximum stored bits per parameter for each architecture across all sizes and levels of depth.

ReLUs Reduce Capacity. In our capacity tasks, the IRNN performed noticeably worse than all other architectures, reaching a maximum bits per parameter of roughly 3.5. To determine if this performance drop was due to the ReLU nonlinearity of the IRNN, or its identity initialization, we sorted through the RNN and UGRNN results (which both have ReLU and tanh as choices for the nonlinearity HP) and looked at the maximum bits per parameter when only optimizations using ReLUs are considered. Indeed, both the RNN and UGRNN bits per parameter dropped dramatically to the 3.5 range (Figure 2a) when those architectures exclusively used ReLU, providing strong evidence that the ReLU activation function is problematic for this capacity task.

[Figure 2, panels (a)-(c): bits per parameter vs. number of parameters (1 layer, tanh vs. ReLU-only), bits per parameter vs. number of steps unrolled, and L2 reconstruction error vs. number of units per layer for a 64-dimensional random input; curves: rnn, irnn, ugrnn, gru, lstm, +rnn (with relu variants in panel a).]

Figure 2: Additional RNN capacity analysis. (a) The effect of the ReLU nonlinearity on capacity. Solid lines indicate bits per parameter for 1-layer architectures (same as Figure 1b), where both tanh and ReLU are nonlinearity choices for the HP tuner. Dashed lines show the maximum bits per parameter for each architecture when only results achieved by the ReLU nonlinearity are considered. (b) Bits per parameter as a function of the number of time steps unrolled. (c) L2 error curve for all architectures of all depths on the memory throughput task. The curve shows the error plotted as a function of the number of units for a random input of dimension 64 (black vertical line). All networks with less than 64 units have error in reconstruction, while all networks with number of units greater than 64 nearly perfectly reconstruct the random input.

An additional capacity bottleneck in RNNs is their ability to store information about their inputs over time. It may be plainly obvious that an IRNN, which is essentially an integrator, can achieve perfect memory of its inputs if the number of inputs is less than or equal to the number of hidden units, but it is not so clear for some of the more complex architectures. So we measured the per-unit input memory empirically, as sketched below. Figure 2c shows the intuitive result that every RNN architecture (at every depth and number of parameters) we studied can reconstruct a random nin-dimensional input at some time in the future, if and only if the number of hidden units per layer in the network, nh, is greater than or equal to nin. Moreover, regardless of RNN architecture, the error in reconstructing the input follows the same curve as a function of the number of hidden units for all RNN variants, corresponding to reconstructing an nh-dimensional subspace of the nin-dimensional input.

We highlight this per-unit capacity to make the point that a per-parameter task capacity appears to be the limiting factor in our experiments (e.g. Figure 1 and Figure 3), and not a per-unit capacity, such as the per-unit capacity to remember previous inputs.
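In its simplest form, the per-unit memory measurement reduces to linear decoding of a single stored input. The sketch below (ours) uses an integrator-like linear state, which suffices to exhibit the nh ≥ nin threshold; the experiments in the paper of course use the full trained architectures.

```python
import numpy as np

def memory_error(n_in=64, n_h=32, n_trials=200, seed=0):
    """L2 error of reconstructing a random n_in-dim input from an
    n_h-unit linear 'integrator' state, via a least-squares readout.
    Error is ~0 iff n_h >= n_in (the state spans the input space)."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(n_h, n_in))   # input embedding
    X = rng.normal(size=(n_in, n_trials)) # random inputs, one per trial
    H = W_in @ X                          # state after T steps (identity dynamics)
    W_out, *_ = np.linalg.lstsq(H.T, X.T, rcond=None)
    return np.mean((H.T @ W_out - X.T) ** 2)

for n_h in (16, 32, 64, 128):
    print(n_h, round(memory_error(n_h=n_h), 4))
```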
Thus when comparing results between architectures, one should normalize different architectures by the number of parameters, and not the number of units, as is frequently done in the literature (e.g. when comparing vanilla RNNs to LSTMs). This makes further sense as, for all common RNN architectures, the computational cost of processing a single sample is linear in the number of parameters, and quadratic in the number of units per layer. As we show in Figure 3d, plotting the capacity results by numbers of units gives very misleading results.

We studied additional tasks that we believed to be easy enough to train that the evaluation loss of different architectures would reveal variations in capacity rather than trainability. A critical aspect of these tasks is that they could not be learned perfectly by any of the model sizes in our experiments. As we change model size, we therefore expect performance on the task to also change. The tasks are (see Appendix, section Task Definitions, for further elaboration of these tasks):

- text8 - 1-step ahead character-based prediction on the text8 Wikipedia dataset (100 million characters) (Mahoney, 2011).
- Random Continuous Functions (RCF) - A task similar to the per-parameter capacity task above, except the target outputs are real numbers (not categorical), and the number of training samples is held fixed.

The performance on these two tasks is shown in Figure 3. The evaluation loss as a function of the number of parameters is plotted in panels a-c and e-g, for the text8 task and RCF task, respectively. For all tasks in this section, the number of parameters rather than the number of units provided the bottleneck on performance, and all architectures performed extremely closely for the same number of parameters. By close performance we mean that, for one model to achieve the same loss as another model, the number of parameters would have to be adjusted by only a small factor (exemplified in Figure 1i for the per-parameter capacity task).

[Figure 3, panels (a)-(d): bits per character on text8 vs. number of parameters (1, 2, and 4 layer models) and vs. number of units (1 layer); panels (e)-(g): RCF squared error vs. number of parameters (1, 2, and 4 layers); curves: rnn, irnn, ugrnn, gru, lstm, +rnn.]

Figure 3: All RNN architectures achieved near identical performance given the same number of parameters, on a language modeling and random function fitting task. (a-c) text8 Wikipedia, number of parameters vs. bits per character for all RNN architectures. From left to right: 1 layer, 2 layer, 4 layer models. (d) text8, number of hidden units vs. bits per character for 1 layer architectures. We note that this is almost always a misleading way to compare architectures, as the more heavily gated architectures appear to do better when compared per-unit. (e-g) Same as (a-c), except showing square error for different model sizes trained on RCFs.

In practice it is widely appreciated that there is often a significant gap in performance between, for example, the LSTM and the vanilla RNN, with the LSTM nearly always outperforming the vanilla RNN. Our per-parameter capacity results provide evidence for a rough equivalence among a variety of RNN architectures, with slightly higher capacity in the vanilla RNN (Figure 1). To reconcile our per-parameter capacity results with widely held experience, we provide evidence that gated architectures, such as the LSTM,
are far easier to train than the vanilla RNN (and often the IRNN).

We study two tasks that are difficult to learn: parallel parentheses counting of independent input streams, and mathematical addition of integers encoded in a character string (see Appendix, section Task Definitions). The parentheses task is moderately difficult to learn, while the arithmetic task is quite hard. The results of the HP optimizations are shown in Figure 4a-4h for the parentheses task, and in Figure 4i-4p for the arithmetic task. These tasks show that, while it is possible for a vanilla RNN to learn these tasks reasonably well, it is far more difficult than for a gated architecture. Note that the best achieved loss on the arithmetic task is still significantly decreasing, even after 2500 HP evaluations (2500 complete optimizations over the training set), for the RNN and IRNN.

There are three noteworthy trends in these trainability experiments. First, across both tasks, and all depths (1, 2, 4 and 8), the RNN and IRNN performed most poorly, and took the longest to learn the task. Note, however, that both the RNN and IRNN always solved the tasks eventually, at least for depth 1. Second, as the stacking depth increased, the gated architectures became the only architectures that could solve the tasks. Third, the most trainable architecture for depth 1 was the GRU, and the most trainable architecture for depth 8 was the +RNN (which performed the best on both of our metrics for trainability, on both tasks).
Figure 4: Some RNN architectures are far easier to train than others. Results of HP searches on extremely difficult tasks. (a) Median evaluation error as a function of HP optimization iteration for 1 layer architectures on the parentheses task. Dots indicate evaluation loss achieved on that HP iteration. (b-d) Same as (a), but for 2, 4 and 8 layer architectures. (e-h) Minimum evaluation error as a function of HP optimization iteration for the parentheses task, in the same depth order as (a-d). (i-p) Same as (a-h), except for the arithmetic task. We note that the best loss for the vanilla RNN is still decreasing after 2,400+ HP evaluations.

To achieve our results on capacity and trainability, we relied heavily on a HP tuner. Most practitioners do not have the time or resources to make use of such a tuner, typically only adjusting the HPs a few times themselves. So we wondered how the various architectures would perform if we set HPs randomly, within the ranges specified (see Appendix for ranges). We tried this 1,000 times on the parentheses task, for all 200k parameter architectures at depths 1 and 8 (Figure 5 and Table 1). The noticeable trends are that the IRNN returned an infeasible error nearly half of the time, and the LSTM (depth 1) and GRU (depth 8) were infeasible the least number of times, where infeasibility means that the training loss diverged. For depth 1, the GRU gave the smallest error and the smallest median error, and for depth 8, the +RNN delivered the smallest error and smallest median error.

Figure 5: For randomly generated hyperparameters, GRU and +RNN are the most easily trainable architectures. Evaluation losses from 1,000 iterations of randomly chosen HP sets for 1 and 8 layer, 200k parameter models on the parentheses task. Statistics from a Welch's t-test for equality of means on all pairs of architectures are presented in Table App.2. (a) Box and whisker plot of evaluation losses for the 1 layer model. (b) Same as (a) but for 8 layers.

Table 1: Fraction of infeasible trials as a result of 1,000 iterations of randomly chosen HP sets for 1 and 8 layer, 200k parameter models trained on the parentheses task.

| Architecture | % Infeasible (1 layer) | % Infeasible (8 layer) |
| +RNN | – | 8.8% |
| GRU | 15.5% | 3.2% |
| IRNN | 56.7% | 44.6% |
| LSTM | 12.0% | 4.0% |
| RNN | 21.5% | 18.7% |
| UGRNN | 20.2% | 11.5% |

5 DISCUSSION

Here we report that a number of RNN variants can hold between 3-6 bits per parameter about their task, and that these variants can remember a number of random inputs that is nearly equal to the number of hidden units in the RNN. The quantification of the number of bits per parameter an RNN can store about a task is particularly important, as it was not previously known whether the amount of information about a task that could be stored was even linear in the number of parameters.

While our results point to empirical capacity limits for both task memorization and input memorization, apparently the requirement to remember features of the input through time is not a practical bottleneck. If it were, then the vanilla RNN and IRNN would perform better than the gated architectures in proportion to the ratio of the number of units, which they do not.
Based on widespread results in the literature, and our own results on our difficult tasks, the loss of some memory capacity (and possibly a small amount of per-parameter storage capacity) for improved trainability seems a worthwhile trade-off. Indeed, the input memory capacity did not obviously impact any task not explicitly designed to measure it, as the error curves - for instance for the language modeling task - overlapped across architectures for the same number of parameters, but not the same number of units.

Our result on per-parameter task capacity, about 5 bits per parameter averaged over architectures, is in surprising agreement with recently published results on the capacity of synapses in biological neurons. This number was recently calculated to be about 4.7 bits per synapse, based on biological synapses in the hippocampus having roughly 26 measurable discrete sizes (Bartol et al., 2016). Our capacity results have implications for compressed networks that employ quantization techniques. In particular, they provide an estimate of the number of bits to which a weight may be compressed without loss in task performance. Coincidentally, in Han et al. (2015), the authors used 5 bits per weight in the fully connected layers.
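As a rough illustration of this quantization implication (our own sketch, not from the paper), the snippet below uniformly quantizes a weight matrix to k bits; k = 5 gives the 32 levels per weight matching both the ~5 bits per parameter estimate and the fully connected layers of Han et al. (2015).

```python
import numpy as np

def quantize_uniform(w, bits=5):
    """Uniformly quantize weights to 2**bits levels over their observed range."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return np.round((w - lo) / step) * step + lo

w = np.random.randn(256, 256).astype(np.float32)
w_q = quantize_uniform(w, bits=5)
print("max abs error:", np.abs(w - w_q).max())  # bounded by step / 2
```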
An additional observation about per-parameter task capacity in our experiments is that it increases for a few time steps beyond one (Figure 2b), and then appears to saturate. We interpret this to suggest that recurrence endows additional capacity to a network with shared parameters, but that there are diminishing returns, and the total capacity remains bounded even as the number of time steps increases.

We also note that performance is nearly constant across RNN architectures if the number of parameters is held fixed. This may motivate the design and use of architectures with small compute-per-parameter ratios, such as mixture of experts RNNs (Shazeer et al., 2017), and RNNs with large embedding dictionaries on input and output (Jozefowicz et al., 2016).

Despite our best efforts, we cannot claim that we perfectly trained any of the models. Potential problems in HP optimization could be local minima, as well as stochastic behavior in the HP optimization as a result of the stochasticity of batching or random draws for weight matrices. We tried to uncover these effects by running the best performing HPs 100 times, and did not observe any serious deviations from the best results (see Table App.1 in the Appendix). Another form of validation comes from the fact that in our capacity task, essentially 3 independent experiments (one for each level of depth) yielded a clustering by architecture (Figure 1).

Do our results yield a framework for choosing a recurrent architecture? In total, we believe yes. As explored in Amodei et al. (2015), a practical concern for recurrent models is speed of execution in a production environment. Our results suggest that if one has a large resource budget for training and a confined resource budget for inference, one should choose the vanilla RNN. Conversely, if the training resource budget is small, but the inference budget large, one should choose a gated model. Another serious concern relates to task complexity. If the task is easy to learn, a vanilla RNN should yield good results. However, if the task is even moderately difficult to learn, a gated architecture is the right choice. Our results point to the GRU as being the most learnable of gated RNNs for shallow architectures, followed by the UGRNN. The +RNN typically performed best for deeper architectures. Our results on trainability confirm the widely held view that the LSTM is an extremely reliable architecture, but it was almost never the best performer in our experiments. Of course further experiments will be required to fully vet the UGRNN and +RNN. All things considered, in an uncertain training environment, our results suggest using the GRU or +RNN.

REFERENCES

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Pierre Baldi and Santosh S Venkatesh. Number of stable points for spin-glasses and neural networks of higher orders. Physical Review Letters, 58(9):913, 1987.

Thomas M Bartol, Cailey Bromer, Justin Kinney, Michael A Chirillo, Jennifer N Bourne, Kristen M Harris, and Terrence J Sejnowski. Nanoconnectomic upper bound on the variability of synaptic plasticity. eLife, 4:e10778, 2016.

Adam S Charles, Han Lun Yap, and Christopher J Rozell. Short-term memory capacity in networks via the restricted isometry property. Neural Computation, 26(6):1198-1235, 2014.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Thomas M Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Transactions on Electronic Computers, (3):326-334, 1965.

Jakob Foerster, Justin Gilmer, Jan Chorowski, Jascha Sohl-Dickstein, and David Sussillo. Intelligible language modeling with input switched affine networks. ICLR 2017 submission, 2016.

Surya Ganguli, Dongsung Huh, and Haim Sompolinsky. Memory traces in dynamical systems. Proceedings of the National Academy of Sciences, 105(48):18970-18975, 2008.

Elizabeth Gardner. The space of interactions in neural network models. Journal of Physics A: Mathematical and General, 21(1):257, 1988.

Alex Graves, Marcus Liwicki, Santiago Fernandez, Roman Bertolami, Horst Bunke, and Jürgen Schmidhuber. A novel connectionist system for unconstrained handwriting recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(5):855-868, 2009.

Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R Steunebrink, and Jürgen Schmidhuber. LSTM: A search space odyssey. arXiv preprint arXiv:1503.04069, 2015.

Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.

Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Itay Hubara, Daniel Soudry, and Ran El Yaniv. Binarized neural networks. arXiv preprint arXiv:1602.02505, 2016.

Herbert Jaeger and Harald Haas. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science, 304(5667):78-80, 2004.

Andrej Karpathy, Justin Johnson, and Fei-Fei Li. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015.

Pascal Koiran and Eduardo D Sontag. Vapnik-Chervonenkis dimension of recurrent neural networks.
Discrete Applied Mathematics, 86(1):63-79, 1998.

Wolfgang Maass, Thomas Natschläger, and Henry Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.

Valerio Mante, David Sussillo, Krishna V Shenoy, and William T Newsome. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503(7474):78-84, 2013.

James Martens and Ilya Sutskever. Learning recurrent neural networks with hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1033-1040, 2011.

Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951-2959, 2012.

Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.

David Sussillo and Omri Barak. Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation, 25(3):626-649, 2013.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.

Olivia L White, Daniel D Lee, and Haim Sompolinsky. Short-term memory in orthogonal neural networks. Physical Review Letters, 92(14):148102, 2004.

Appendix

We used a HP tuner that uses a Gaussian Process (GP) Bandits approach for HP optimization. Our setting of the tuner's internal parameters was such that it uses Batched GP Bandits with an expected improvement acquisition function and a Matern 5/2 kernel with feature scaling and automatic relevance determination performed by optimizing over kernel HPs. Please see Desautels et al. (2014) and Snoek et al. (2012) for closely related work.

For all our tasks, we requested HPs from the tuner, and reported loss on a validation dataset. For the per-parameter capacity task, the evaluation, validation and training datasets were identical. For text8, the validation and evaluation sets consisted of different sections of held out data. For all other tasks, evaluation, validation, and training sets were randomly drawn from the same distribution. The performance we plot in all cases is on the evaluation dataset.

Below is the list of all tunable HPs that were generically applied to all models. In total, each RNN variant had between 10 and 27 HP dimensions relating to the architecture, optimization, and regularization.

- s(·) - as used in the following RNN definitions, a nonlinearity determined by the HP tuner, chosen from {ReLU, tanh}. The only exception was the IRNN, which used ReLU exclusively.
- For any matrix that is inherently square, e.g. Wh, there were three possible initializations: identity, orthogonal, or a random normal distribution scaled by 1/√nh, with nh the number of recurrent units. The sole exception was the RNN, which was limited to either orthogonal or random normal initializations, to differentiate it from the IRNN.
- For any matrix that is inherently rectangular, e.g. Wx, we initialized with a random normal distribution scaled by 1/√nin, with nin the number of inputs.
- For all matrix initializations except the identity initialization, there was a multiplicative scalar used to set the scale of the matrix. The scalar was exponentially distributed in [0.01, 2.0] for recurrent matrices and [0.001, 2.0] for rectangular matrices.
- Biases could have two possible distributions: all biases set to a constant value, or drawn from a standard normal distribution.
- For all bias initializations, a multiplicative scalar was drawn, uniformly distributed in [-2.0, 2.0], and applied to the bias initialization.
- We included a scalar bias HP bfg for architectures that contain forget or update gates, as is commonly employed in practice, which was uniformly distributed in [0.0, 6.0].

Additionally, the HP tuner was used to optimize HPs associated with learning:

- The number of training steps - the exact range varied between tasks, but always fell between 50K and 20M.
- One of four optimization algorithms could be chosen: vanilla SGD, SGD with momentum, RMSProp (Tieleman & Hinton, 2012), or ADAM (Kingma & Ba, 2014).
- Learning rate initial value - exponentially distributed in [1e-4, 1e-1].
- Learning rate decay - exponentially distributed in [1e-3, 1]. The learning rate exponentially decays by this factor over the number of training steps chosen by the tuner.
- Optimizer momentum-like parameter - expressed as a logit, and uniformly distributed in [1.0, 7.0].
- Gradient clipping value - exponentially distributed in [1, 100].
- L2 decay - exponentially distributed in [1e-8, 1e-3].

The perceptron capacity task also had associated HPs:

- The number of samples in the dataset, b - between 0.1x and 10x the number of model parameters.

Some optimization algorithms had additional parameters, such as ADAM's second order decay rate or epsilon parameter. These were set to their default values and not optimized. The batch size was set individually by hand for all experiments. The same seed was used to initialize the random number generator for all task parameters, whereas the generator was randomly seeded for network parameters (e.g. initializations). Note that for each network, the initial condition was set to a learned vector.

Figure App.1: In the capacity task, the optimal dataset size found by the HP tuner was only slightly larger than the mutual information in bits reported in Figure 1h, for all architectures at all sizes and depths.

B TASK DEFINITIONS

PERCEPTRON CAPACITY

While at a high level, for the perceptron capacity task, we wanted to optimize the amount of information the RNN carried about true random labels, in practice the training objective was standard cross-entropy. However, when returning a validation loss to the HP tuner, we returned the mutual information I(Y; Ŷ|X). Conceptually, this is as if there is one nested optimization inside another. The inner loop optimizes the RNN for the set of HPs, training cross entropy, but returning mutual information. The outer loop then chooses the HPs, in particular, the number of samples b in equation (21), so as to maximize the amount of mutual information. This implementation is necessitated because there is no straightforward way to differentiate mutual information with respect to the number of samples. During training, cross entropy error is evaluated beginning after 5 time steps.
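A minimal sketch (ours, assuming binary labels and a simple plug-in estimate over the empirical joint distribution) of how a mutual-information score of this kind can be computed from true and predicted labels:

```python
import numpy as np

def mutual_information_bits(y_true, y_pred):
    """Plug-in estimate of I(Y; Yhat) in bits from the empirical joint
    distribution of binary labels and binary predictions."""
    y_true = np.asarray(y_true); y_pred = np.asarray(y_pred)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((y_true == a) & (y_pred == b))
            p_a, p_b = np.mean(y_true == a), np.mean(y_pred == b)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

y = np.random.randint(0, 2, 10000)
yhat = np.where(np.random.rand(10000) < 0.9, y, 1 - y)  # 90% agreement
b = 10000  # number of labels in the dataset
print(b * mutual_information_bits(y, yhat), "bits stored over", b, "labels")
```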
RANDOM CONTINUOUS FUNCTION

A dataset was constructed consisting of N = 10 random unit norm Gaussian input vectors x_i, with size d = 50. Target scalar outputs y_i were generated for each input vector, and were also drawn from a standard normal distribution. Sample i was weighted by γ_i = e^(−i/τ)/Z, where Z was a normalization constant such that the weightings summed to 1, and the characteristic time constant τ = 5000. The loss function for training was calculated after 50 time steps and was the weighted square error on the y_i, with the γ_i acting as the weighting terms.

A HP determined whether the input vector x_i was presented to the RNN only at the first time step, or whether it was presented at every time step.

MEMORY CAPACITY

In the Memory Capacity task, we wanted to know how much information an RNN can reconstruct about its inputs at some later time point. We picked an input dimension, 64, and varied the number of parameters in the networks such that the number of hidden units was roughly centered around 64. After 12 time steps the target of the network was exact reconstruction of the input, with a square error loss. The inputs were random values drawn from a uniform distribution between −√3 and √3 (corresponding to a variance of 1).

TEXT8

In the text8 task, the task was to predict one character ahead in the text8 dataset (1e8 characters of Wikipedia) (Mahoney, 2011). The input was a hot-one encoded sequence, as was the output. The loss was the cross-entropy loss on a softmax output layer. Rather than use partial unrolling as is common in language modeling, we generated random pointers into the text. The first 13 time steps (where T = 50) were used to initialize the RNN into a normal operating mode, and the remaining steps were used for training or inference.

PARENTHESES COUNTING TASK

The parentheses counting task independently counts the number of opened 'parens', e.g. '(', without the closing ')'. Here 'parens' is used to mean any of 10 paren type pairs, e.g. '<>' or '[]'. Additionally, there were 10 noise characters, 'a' to 'j'. For each paren type, there was a 20D + 10D = 30D hot-one encoding of all paren and noise symbols, for a total of 300 inputs. The output for each paren type was a hot-one encoding of the digits 0-9, which represented the count of the opened parens of that type. If the count exceeded 9, the network kept the count at 9; if the paren was closed, the count decreased. The loss was the sum of cross-entropy losses, one for each paren type. Finally, for each paren input stream, 50% random noise characters were drawn, and 50% random paren characters were drawn, e.g. 10 streams like '(a<a<bcb>[[D)'. Parens of other types were treated as noise for the current type, e.g. for the above string, if the paren type was '<>', the answer is '1' at the end. The loss was defined only at the final time point, T, and T = 175.

ARITHMETIC TASK

In the arithmetic task, a hot-one encoded character sequence of an addition problem was presented as input to the network, e.g., '-343243+93851= ', and the output was the hot-one encoded answer, including the correct amount of left-padded spaces, '-249392'. An additional HP for this task was the number of compute steps (1-6) between the input of the '=' and the first non-space character in the target output sequence. The two numbers in the input were randomly, uniformly selected in [-1e7, 1e7]. After 36 time steps, the cross-entropy loss was calculated. We found this task to be extremely difficult for the networks to learn, but when the task was learned, certain of the network architectures could perform the task nearly perfectly.
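A hypothetical generator for arithmetic task examples, following the description above (not the authors' code; the padding width is an assumed parameter):

```python
import random

def arithmetic_example(width=9, seed=0):
    """Build one addition problem string and its space-padded answer,
    with operands drawn uniformly from [-1e7, 1e7]."""
    rng = random.Random(seed)
    a, b = rng.randint(-10**7, 10**7), rng.randint(-10**7, 10**7)
    # assumption: a negative second operand prints its own minus sign
    x = f"{a}+{b}= " if b >= 0 else f"{a}{b}= "
    y = str(a + b).rjust(width)  # left-pad the answer with spaces
    return x, y

x, y = arithmetic_example()
print(repr(x), repr(y))
```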
C HP ROBUSTNESS

We wondered how robust the HPs are to the variability of both random batching of data and random initialization of parameters. So we identified the best HPs from the parentheses experiments of 100k parameter, 1 layer architectures, and reran the parameter optimization 100 times. We measured the number of infeasible experiments, as well as a number of statistics of the loss for the reruns (Table App.1). These results show that the best HPs yielded a distribution of losses very close to the originally reported loss value.

Table App.1: Results of 100 runs on the parentheses task using the best HPs for each architecture, at depth 1. HPs were chosen to be the set which achieved the minimum loss. The table shows the original loss achieved by the HP tuner, the fraction of infeasible trials, the minimum, mean, and maximum loss from running 100 iterations of the same HPs, the standard deviation, and the standard deviation divided by the mean.

| Architecture | Original | Infeasible | Min | Mean | Max | S.D. | S.D./Mean |
| RNN | 1.16e-2 | 0% | 1.41e-2 | 8.21e-2 | 0.294 | 5.22e-2 | 0.636 |
| IRNN | 4.20e-4 | 48% | 2.24e-4 | 5.02e-4 | 8.69e-4 | 1.35e-4 | 0.269 |
| UGRNN | 1.02e-4 | 0% | 3.66e-5 | 2.71e-4 | 6.06e-3 | 7.12e-4 | 2.63 |
| GRU | 2.80e-4 | 1% | 7.66e-5 | 1.89e-4 | 5.48e-4 | 9.08e-5 | 0.480 |
| LSTM | 7.96e-4 | 0% | 8.10e-4 | 2.02e-3 | 0.0145 | 2.31e-3 | 1.14 |

Table App.2: Results of Welch's t-test for equality of means on evaluation losses of architecture pairs trained on the parentheses task with randomly sampled HPs. The 8 layer GRU and UGRNN, IRNN and RNN, and LSTM and RNN pairs have loss distributions that are not different with statistical significance (p > 0.05). A negative t-statistic indicates that the mean of the second architecture in the pair is larger than that of the first.

| Pair | t-stat (1 layer) | df (1 layer) | p-value (1 layer) | t-stat (8 layer) | df (8 layer) | p-value (8 layer) |
| +RNN/GRU | – | – | – | -23.6 | 1080 | < 0.001 |
| +RNN/IRNN | – | – | – | -25.7 | 954 | < 0.001 |
| +RNN/LSTM | – | – | – | -26.1 | 941 | < 0.001 |
| +RNN/RNN | – | – | – | -25.8 | 946 | < 0.001 |
| +RNN/UGRNN | – | – | – | -24.3 | 1050 | < 0.001 |
| GRU/IRNN | -7.74 | 696 | < 0.001 | -3.51 | 1360 | < 0.001 |
| GRU/LSTM | -6.65 | 1750 | < 0.001 | -4.84 | 1290 | < 0.001 |
| GRU/RNN | -26.5 | 1340 | < 0.001 | -3.93 | 1330 | < 0.001 |
| GRU/UGRNN | -4.11 | 1620 | < 0.001 | -1.13 | 1840 | 0.261 |
| IRNN/LSTM | 2.23 | 652 | 0.0264 | -2.04 | 1250 | 0.0420 |
| IRNN/RNN | -12.7 | 426 | < 0.001 | -0.571 | 1250 | 0.568 |
| IRNN/UGRNN | 4.03 | 719 | < 0.001 | 2.37 | 1320 | 0.0178 |
| LSTM/RNN | -19.6 | 1500 | < 0.001 | 1.53 | 1730 | 0.125 |
| LSTM/UGRNN | 2.25 | 1640 | 0.0247 | 3.81 | 1260 | < 0.001 |
| RNN/UGRNN | 20.7 | 1210 | < 0.001 | 2.81 | 1300 | 0.00498 |
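For reference, comparisons of the kind reported in Table App.2 can be reproduced with a standard Welch's (unequal-variance) t-test; a minimal sketch with synthetic lognormal losses standing in for the real evaluation losses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
losses_a = rng.lognormal(mean=-4.0, sigma=1.0, size=1000)  # stand-in for arch A
losses_b = rng.lognormal(mean=-3.5, sigma=1.2, size=1000)  # stand-in for arch B

# equal_var=False selects Welch's t-test for unequal variances
t_stat, p_value = stats.ttest_ind(losses_a, losses_b, equal_var=False)
print(f"t = {t_stat:.3g}, p = {p_value:.3g}")
```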
BOOSTING IMAGE CAPTIONING WITH ATTRIBUTES

Ting Yao, Yingwei Pan, Yehao Li, Zhaofan Qiu, Tao Mei

ABSTRACT

Automatically describing an image with a natural language has been an emerging challenge in both fields of computer vision and natural language processing. In this paper, we present Long Short-Term Memory with Attributes (LSTM-A) - a novel architecture that integrates attributes into the successful Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs) image captioning framework, by training them in an end-to-end manner. To incorporate attributes, we construct variants of architectures by feeding image representations and attributes into RNNs in different ways to explore the mutual but also fuzzy relationship between them. Extensive experiments are conducted on the COCO image captioning dataset and our framework achieves superior results when compared to state-of-the-art deep models. Most remarkably, we obtain METEOR/CIDEr-D of 25.2%/98.6% on testing data of the widely used and publicly available splits in (Karpathy & Fei-Fei, 2015) when extracting image representations by GoogleNet, and achieve to date top-1 performance on the COCO captioning Leaderboard.

1 INTRODUCTION

Accelerated by the tremendous increase in Internet bandwidth and the proliferation of sensor-rich mobile devices, image data has been generated, published and spread explosively, becoming an indispensable part of today's big data. This has encouraged the development of advanced techniques for a broad range of image understanding applications. A fundamental issue that underlies the success of these technological advances is recognition (Szegedy et al., 2015; Simonyan & Zisserman, 2015; He et al., 2016). Recently, researchers have strived to automatically describe the content of an image with a complete and natural sentence, which has a great potential impact, for instance, on robotic vision or helping visually impaired people. Nevertheless, this problem is very challenging, as the description generation model should capture not only the objects or scenes presented in the image but also be capable of expressing how the objects/scenes relate to each other in a natural sentence.

The main inspiration of recent attempts on this problem (Donahue et al., 2015; Vinyals et al., 2015; Xu et al., 2015; You et al., 2016) comes from the advances in using RNNs for machine translation (Sutskever et al., 2014), which is to translate text from one language (e.g., English) to another (e.g., Chinese). The basic idea is to perform a sequence to sequence learning for translation, where an encoder RNN reads the input sequential sentence, one word at a time, till the end of the sentence, and then a decoder RNN is exploited to generate the sentence in the target language, one word at each time step. Following this philosophy, it is natural to employ a CNN instead of the encoder RNN for image captioning, which is regarded as an image encoder to produce image representations.

While encouraging performances are reported, these CNN plus RNN image captioning methods translate directly from image representations to language, without explicitly taking more high-level semantic information from images into account. Furthermore, attributes are properties observed in images with rich semantic cues and have been proved to be effective in visual recognition (Parikh & Grauman, 2011).
A valid question is how to incorporate high-level image attributes into the CNN plus RNN image captioning architecture as complementary knowledge in addition to image representations. We investigate particularly in this paper the architectures that exploit the mutual relationship between image representations and attributes for enhancing image description generation. Specifically, to better demonstrate the impact of simultaneously utilizing the two kinds of representations, we devise variants of architectures by feeding them into the RNN in different placements and moments, e.g., leveraging only attributes, inserting image representations first and then attributes or vice versa, and inputting image representations/attributes once or at each time step.

The main contribution of this work is the proposal of attribute augmented architectures by integrating the attributes into the CNN plus RNN image captioning framework, which is a problem not yet fully understood in the literature. By leveraging more knowledge for building richer representations and description models, our work takes a further step forward to enhance image captioning and could have a direct impact of indicating a new direction of vision and language research. More importantly, the utilization of attributes also has a great potential to be an elegant solution for generating open-vocabulary sentences, making image captioning systems really practical.

2 RELATED WORK

The first direction, template-based methods, predefine the template for sentence generation, which follows some specific rules of language grammar and splits the sentence into several parts (e.g., subject, verb, and object). With such sentence fragments, many works align each part with image content and then generate the sentence for the image. Obviously, most of them highly depend on the templates of sentences and always generate sentences with rigid syntactical structure. For example, Kulkarni et al. employ a Conditional Random Field (CRF) model to predict labeling based on the detected objects, attributes, and prepositions, and then generate the sentence with a template by filling in slots with the most likely labeling (Kulkarni et al., 2013). Similar in spirit, Yang et al. utilize a Hidden Markov Model (HMM) to select the best objects, scenes, verbs, and prepositions with the highest log-likelihood ratio for template-based sentence generation in (Yang et al., 2011). Furthermore, the traditional simple template is extended to syntactic trees in (Mitchell et al., 2012), which also starts from detecting attributes from the image as description anchors and then connecting ordered objects with a syntactically well-formed tree, followed by adding necessary descriptive information.

Search-based approaches "generate" a sentence for an image by selecting the most semantically similar sentences from a sentence pool or directly copying sentences from other visually similar images. This direction indeed can achieve human-level descriptions, as all sentences are from existing human-generated sentences. The need to collect human-generated sentences, however, makes the sentence pool hard to scale up. Moreover, the approaches in this dimension cannot generate novel descriptions. For instance, in (Farhadi et al., 2010), an intermediate meaning space based on the triplet of object, action, and scene is proposed to measure the similarity between image and sentence, where the top sentences are regarded as the generated sentences for the target image.
Ordonez et al. (Ordonez et al., 2011) search images in a large captioned photo collection by using the combination of object, stuff, people, and scene information and transfer the associated sentences to the query image. Recently, a simple k-nearest neighbor retrieval model is utilized in (Devlin et al., 2015) and the best or consensus caption is selected from the returned candidate captions, which even performs as well as several state-of-the-art language-based models.

Different from template-based and search-based models, language-based models aim to learn the probability distribution in the common space of visual content and textual sentence to generate novel sentences with more flexible syntactical structures. In this direction, recent works explore such probability distribution mainly using neural networks for image captioning. For instance, in (Vinyals et al., 2015), Vinyals et al. propose an end-to-end neural network architecture utilizing an LSTM to generate a sentence for an image, which is further incorporated with an attention mechanism in (Xu et al., 2015) to automatically focus on salient objects when generating corresponding words. More recently, in (Wu et al., 2016), high-level concepts/attributes are shown to obtain clear improvements on image captioning when injected into an existing state-of-the-art RNN-based model, and such visual attributes are further utilized as semantic attention in (You et al., 2016) to enhance image captioning.

In short, our work in this paper belongs to the language-based models. Different from most of the aforementioned language-based models, which mainly focus on sentence generation by solely depending on image representations (Donahue et al., 2015; Kiros et al., 2014; Mao et al., 2015; Vinyals et al., 2015; Xu et al., 2015) or high-level attributes (Wu et al., 2016), our work contributes by studying not only jointly exploiting image representations and attributes for image captioning, but also how the architecture can be better devised by exploring the mutual relationship in between. It is also worth noting that (You et al., 2016) also additionally involves attributes for image captioning. Ours is fundamentally different in the way that (You et al., 2016) is as a result of utilizing attributes to model semantic attention to the locally previous words, as opposed to holistically employing attributes as a kind of complementary representations in this work.

In this paper, we devise our CNN plus RNN architectures to generate descriptions for images under the umbrella of additionally incorporating the detected high-level attributes. Specifically, we begin this section by presenting the problem formulation, followed by five variants of our image captioning frameworks with attributes.

3.1 PROBLEM FORMULATION

Suppose we have an image I to be described by a textual sentence S, where S = {w_1, w_2, ..., w_{N_s}} consists of N_s words. Let I ∈ R^{D_v} and w_t ∈ R^{D_s} denote the D_v-dimensional image representations of the image I and the D_s-dimensional textual features of the t-th word in sentence S, respectively. Furthermore, we have a feature vector A ∈ R^{D_a} to represent the probability distribution over the high-level attributes for image I. Specifically, we train the attribute detectors by using the weakly-supervised approach of Multiple Instance Learning (MIL) in (Fang et al., 2015) on image captioning benchmarks. For an attribute w_a, an image I is regarded as a positive bag of regions (instances) if w_a exists in image I's ground-truth sentences, and as a negative bag otherwise. By inputting all the bags into a noisy-OR MIL model, the probability of the bag b_I which contains attribute w_a is
measured on the probabilities of all the regions in the bag as

Pr^{w_a}(b_I) = 1 − ∏_{r_i ∈ b_I} (1 − p_i^{w_a}),    (1)

where p_i^{w_a} is the probability of the attribute w_a predicted by region r_i, and can be calculated through a sigmoid layer after the last convolutional layer in the fully convolutional network. In particular, the convolutional activations from the last convolutional layer form a spatial grid of regions, each with representation dimension h, resulting in a response map which preserves the spatial dependency of the image. Then, a cross-entropy loss is calculated based on the probabilities of all the attributes at the top of the whole architecture to optimize the MIL model. With the learnt MIL model on the image captioning dataset, we treat the final image-level response probabilities of all the attributes as A.
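A minimal sketch of the noisy-OR pooling in Eq. (1) (our own illustration; the region probabilities here are random stand-ins for the sigmoid outputs of the fully convolutional network, and the 12x12 grid is only an example size):

```python
import numpy as np

def noisy_or(region_probs):
    """Bag-level attribute probability: 1 - prod_i (1 - p_i) over regions."""
    region_probs = np.asarray(region_probs)
    return 1.0 - np.prod(1.0 - region_probs)

# e.g. a 12x12 response map of per-region probabilities for one attribute
p_regions = np.random.rand(12, 12) * 0.1
print(noisy_or(p_regions.ravel()))
```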
Inspired by the recent successes of probabilistic sequence models leveraged in statistical machine translation (Bahdanau et al., 2015; Sutskever et al., 2014), we aim to formulate our image captioning models in an end-to-end fashion based on RNNs which encode the given image and/or its detected attributes into a fixed dimensional vector and then decode it to the target output sentence. Hence, the sentence generation problem we explore here can be formulated by minimizing the following energy loss function:

E(I, A, S) = −log Pr(S | I, A).    (2)

Since the model produces one word in the sentence at each time step, it is natural to apply the chain rule to model the joint probability over the sequential words. Thus, the log probability of the sentence is given by the sum of the log probabilities over the words and can be expressed as

log Pr(S | I, A) = Σ_{t=1}^{N_s} log Pr(w_t | I, A, w_0, ..., w_{t−1}).    (3)

By minimizing this loss, the contextual relationship among the words in the sentence can be guaranteed given the image and its detected attributes.

Figure 1: Five variants of our LSTM-A framework (better viewed in color).

We formulate this task as a variable-length sequence to sequence problem and model the parametric distribution Pr(w_t | I, A, w_0, ..., w_{t−1}) in Eq. (3) with Long Short-Term Memory (LSTM), which is a widely used type of RNN. The vector formulas for an LSTM layer forward pass are summarized below. For time step t, x^t and h^t are the input and output vector respectively, T are input weight matrices, R are recurrent weight matrices, and b are bias vectors. The sigmoid σ(·) and hyperbolic tangent φ(·) are element-wise non-linear activation functions. The element-wise product of two vectors is denoted with ⊙. Given inputs x^t, h^{t−1} and c^{t−1}, the LSTM unit updates for time step t are:

g^t = φ(T_g x^t + R_g h^{t−1} + b_g),   i^t = σ(T_i x^t + R_i h^{t−1} + b_i),
f^t = σ(T_f x^t + R_f h^{t−1} + b_f),   c^t = g^t ⊙ i^t + c^{t−1} ⊙ f^t,
o^t = σ(T_o x^t + R_o h^{t−1} + b_o),   h^t = φ(c^t) ⊙ o^t,    (4)

where g^t, i^t, f^t, c^t, o^t, and h^t are cell input, input gate, forget gate, cell state, output gate, and cell output of the LSTM, respectively.

3.2 LONG SHORT-TERM MEMORY WITH ATTRIBUTES

Unlike the existing image captioning models in (Donahue et al., 2015; Vinyals et al., 2015), which solely encode image representations for sentence generation, our proposed Long Short-Term Memory with Attributes (LSTM-A) model additionally integrates the detected high-level attributes into the LSTM. We devise five variants of LSTM-A for the involvement of two design purposes. The first purpose is about where to feed attributes into the LSTM, and three architectures, i.e., LSTM-A1 (leveraging only attributes), LSTM-A2 (inserting image representations first) and LSTM-A3 (feeding attributes first), are derived from this view. The second is about when to input attributes or image representations into the LSTM, and we design LSTM-A4 (inputting image representations at each time step) and LSTM-A5 (inputting attributes at each time step) for this purpose. An overview of the LSTM-A architectures is depicted in Figure 1.

3.2.1 LSTM-A1 (LEVERAGING ONLY ATTRIBUTES)

Given the detected attributes, one natural way is to directly inject the attributes as representations at the initial time to inform the LSTM about the high-level attributes. This kind of architecture with only attributes input is named LSTM-A1. It is also worth noting that the attributes-based model in (Wu et al., 2016) is similar to LSTM-A1 and can be regarded as one special case of our LSTM-A. Given the attribute representations A and the corresponding sentence W = [w_0, w_1, ..., w_{N_s}], the LSTM updating procedure in LSTM-A1 is

x^{−1} = T_a A,
x^t = T_s w_t,  t ∈ {0, ..., N_s − 1},   and   h^t = f(x^t),  t ∈ {0, ..., N_s − 1},    (5)

where D_e is the dimensionality of the LSTM input, T_a ∈ R^{D_e×D_a} and T_s ∈ R^{D_e×D_s} are the transformation matrices for attribute representations and textual features of words, respectively, and f is the updating function within the LSTM unit. Please note that for the input sentence W = [w_0, ..., w_{N_s}] we take w_0 as the start sign word to inform the beginning of the sentence and w_{N_s} as the end sign word which indicates the end of the sentence. Both of the special sign words are included in our vocabulary. Most specifically, at the initial time step, the attribute representations are transformed as the input to the LSTM, and then in the next steps, the word embedding x^t will be input into the LSTM along with the previous step's hidden state h^{t−1}. In each time step (except the initial step), we use the LSTM cell output h^t to predict the next word. Here a softmax layer is applied after the LSTM layer to produce a probability distribution over all the D_s words in the vocabulary as

Pr(w_{t+1}) = exp((T_s w_{t+1})^⊤ h^t) / Σ_{w ∈ W} exp((T_s w)^⊤ h^t).
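To make Eq. (4) and the LSTM-A1 decoding step concrete, here is a minimal numpy sketch (ours, with random matrices standing in for the learned T, R, b and T_s, toy dimensions as assumptions, and the word transformation T_s assumed shared between input embedding and output scoring):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, T, R, b):
    """One forward step of Eq. (4); T, R, b hold per-gate parameters."""
    g = np.tanh(T["g"] @ x + R["g"] @ h + b["g"])   # cell input
    i = sigmoid(T["i"] @ x + R["i"] @ h + b["i"])   # input gate
    f = sigmoid(T["f"] @ x + R["f"] @ h + b["f"])   # forget gate
    c_new = g * i + c * f                           # cell state
    o = sigmoid(T["o"] @ x + R["o"] @ h + b["o"])   # output gate
    return np.tanh(c_new) * o, c_new                # cell output h^t, state c^t

De, Ds = 8, 20  # LSTM size and vocabulary size (toy values)
rng = np.random.default_rng(0)
T = {k: rng.normal(size=(De, De)) * 0.1 for k in "gifo"}
R = {k: rng.normal(size=(De, De)) * 0.1 for k in "gifo"}
b = {k: np.zeros(De) for k in "gifo"}
Ts = rng.normal(size=(De, Ds)) * 0.1  # word transformation T_s

x = Ts @ np.eye(Ds)[3]                # embed one-hot word w_t
h, c = lstm_step(x, np.zeros(De), np.zeros(De), T, R, b)
logits = Ts.T @ h                     # score every word against h^t
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
print(probs.argmax(), probs.sum())
```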
3.2.2 LSTM-A2 (INSERTING IMAGE REPRESENTATIONS FIRST)

To further leverage both image representations and high-level attributes in the encoding stage of our LSTM-A, we design the second architecture, LSTM-A2, by treating both of them as atoms in the input sequence to the LSTM. Specifically, at the initial step, the image representations I are firstly transformed into the LSTM to inform the LSTM about the image content, followed by the attribute representations A, which are encoded into the LSTM at the next time step to inform the high-level attributes. Then the LSTM decodes each output word based on the previous word x^t and the previous step's hidden state h^{t−1}, which is similar to the decoding stage in LSTM-A1. The LSTM updating procedure in LSTM-A2 is designed as

x^{−2} = T_v I  and  x^{−1} = T_a A,
x^t = T_s w_t,  t ∈ {0, ..., N_s − 1},   and   h^t = f(x^t),  t ∈ {0, ..., N_s − 1},    (6)

where T_v ∈ R^{D_e×D_v} is the corresponding transformation matrix for image representations.

3.2.3 LSTM-A3 (FEEDING ATTRIBUTES FIRST)

The third design, LSTM-A3, is similar to LSTM-A2 as both designs utilize image representations and high-level attributes to form the input sequence to the LSTM in the encoding stage, except that the orders of encoding are different. In LSTM-A3, the attribute representations are firstly encoded into the LSTM and then the image representations are transformed into the LSTM at the second time step. The whole LSTM updating procedure in LSTM-A3 is

x^{−2} = T_a A  and  x^{−1} = T_v I,
x^t = T_s w_t,  t ∈ {0, ..., N_s − 1},   and   h^t = f(x^t),  t ∈ {0, ..., N_s − 1}.    (7)

3.2.4 LSTM-A4 (INPUTTING IMAGE REPRESENTATIONS AT EACH TIME STEP)

Different from the former three designed architectures, which mainly inject high-level attributes and image representations at the encoding stage of the LSTM, we next modify the decoding stage in our LSTM-A by additionally incorporating image representations or high-level attributes. More specifically, in LSTM-A4, the attribute representations are injected once at the initial step to inform the LSTM about the high-level attributes, and then image representations are fed at each time step as an extra input to the LSTM to emphasize the image content frequently among the memory cells in the LSTM. Hence, the LSTM updating procedure in LSTM-A4 is:

x^{−1} = T_a A,
x^t = T_s w_t + T_v I,  t ∈ {0, ..., N_s − 1},   and   h^t = f(x^t),  t ∈ {0, ..., N_s − 1}.    (8)

3.2.5 LSTM-A5 (INPUTTING ATTRIBUTES AT EACH TIME STEP)

The last design, LSTM-A5, is similar to LSTM-A4 except that it firstly encodes image representations and then feeds attribute representations as an additional input to the LSTM at each step in the decoding stage to emphasize the high-level attributes frequently. Accordingly, the LSTM updating procedure in LSTM-A5 is

x^{−1} = T_v I,
x^t = T_s w_t + T_a A,  t ∈ {0, ..., N_s − 1},   and   h^t = f(x^t),  t ∈ {0, ..., N_s − 1}.    (9)
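Since the five variants differ only in how the LSTM input sequence is assembled, the schedules of Eqs. (5)-(9) can be summarized compactly; the following is our own sketch (T_v, T_a, T_s denote the image, attribute and word transformations; all dimensions and values are random stand-ins):

```python
import numpy as np

def build_inputs(variant, words, I, A, Tv, Ta, Ts):
    """Return the list of LSTM inputs [x^-2, x^-1, x^0, ...] per LSTM-A variant."""
    img, att = Tv @ I, Ta @ A
    word = lambda w: Ts @ w
    if variant == "A1":   # attributes only, at the initial step
        return [att] + [word(w) for w in words]
    if variant == "A2":   # image first, then attributes
        return [img, att] + [word(w) for w in words]
    if variant == "A3":   # attributes first, then image
        return [att, img] + [word(w) for w in words]
    if variant == "A4":   # attributes once, image added at every step
        return [att] + [word(w) + img for w in words]
    if variant == "A5":   # image once, attributes added at every step
        return [img] + [word(w) + att for w in words]
    raise ValueError(variant)

De, Dv, Da, Ds = 8, 16, 10, 20
rng = np.random.default_rng(0)
Tv, Ta, Ts = (rng.normal(size=(De, d)) for d in (Dv, Da, Ds))
words = [np.eye(Ds)[i] for i in (0, 4, 7)]  # one-hot w_0, w_1, w_2
I, A = rng.normal(size=Dv), rng.random(Da)
print(len(build_inputs("A2", words, I, A, Tv, Ta, Ts)))  # 2 + 3 inputs
```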
4.1 DATASET

The dataset, COCO, is the most popular benchmark for image captioning, containing 82,783 training images and 40,504 validation images. There are 5 human-annotated descriptions per image. As the annotations of the official testing set are not publicly available, we follow the widely used settings in prior works (You et al., 2016; Zhou et al., 2016) and take 82,783 images for training, 5,000 for validation and 5,000 for testing.

Data Preprocessing. Following (Karpathy & Fei-Fei, 2015), we convert all the descriptions in the training set to lower case and discard rare words which occur less than 5 times, resulting in a final vocabulary with 8,791 unique words on the COCO dataset.

Features and Parameter Settings. Each word in the sentence is represented as a "one-hot" vector (binary index vector in a vocabulary). For image representations, we take the output of the 1,024-way pool5/7x7_s1 layer from GoogleNet (Szegedy et al., 2015) pre-trained on the ImageNet ILSVRC12 dataset (Russakovsky et al., 2015). For attribute representations, we select the 1,000 most common words on COCO as the high-level attributes and train the attribute detectors with the MIL model (Fang et al., 2015) purely on the training data of COCO, resulting in a final 1,000-way vector of probabilities of attributes. The dimensions of the input and hidden layers in the LSTM are both set to 1,024.

Implementation Details. We mainly implement our image captioning models based on Caffe (Jia et al., 2014), which is one of the widely adopted deep learning frameworks. Specifically, with an initial learning rate of 0.01 and mini-batch size set to 1,024, the objective value can decrease to 25% of the initial loss and reach a reasonable result after 50,000 iterations (about 123 epochs).

Testing Strategies. For sentence generation in the testing stage, there are two common strategies. One is to choose the word with maximum probability at each time step and set it as the LSTM input for the next time step until the end sign word is emitted or the maximum length of sentence is reached. The other strategy is beam search, which selects the top-k best sentences at each time step and considers them as the candidates to generate new top-k best sentences at the next time step. We adopt the second strategy and the beam size k is empirically set to 3.
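A minimal beam search sketch of the second strategy (ours, not the authors' implementation; `step_fn` stands in for one LSTM step returning log-probabilities over the vocabulary and the new state):

```python
import numpy as np

def beam_search(step_fn, init_state, bos, eos, k=3, max_len=20):
    """Keep the k best partial sentences by total log-probability at each step."""
    beams = [(0.0, [bos], init_state)]            # (log-prob, words, state)
    done = []
    for _ in range(max_len):
        candidates = []
        for logp, words, state in beams:
            log_probs, new_state = step_fn(words[-1], state)
            for w in np.argsort(log_probs)[-k:]:  # top-k continuations
                candidates.append((logp + log_probs[w], words + [int(w)], new_state))
        beams = sorted(candidates, key=lambda b: b[0])[-k:]
        done += [b for b in beams if b[1][-1] == eos]   # finished sentences
        beams = [b for b in beams if b[1][-1] != eos]
        if not beams:
            break
    return max(done + beams, key=lambda b: b[0])[1]

# toy step_fn: a fixed random "language model" over a 10-word vocabulary
rng = np.random.default_rng(0)
table = np.log(rng.dirichlet(np.ones(10), size=10))
step = lambda w, s: (table[w], s)
print(beam_search(step, None, bos=0, eos=9))
```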
Moreover, to avoid model-level overfitting, we utilize an ensembling strategy to fuse the prediction results of 5 identical models as in previous works (Vinyals et al., 2015; You et al., 2016). Please note that all 5 identical models are trained with different initializations separately.

Evaluation Metrics. For the evaluation of our proposed models, we adopt five metrics: BLEU@N (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), ROUGE-L (Lin, 2004), CIDEr-D (Vedantam et al., 2015) and SPICE (Anderson et al., 2016). All the metrics are computed by using the codes released by the COCO Evaluation Server (Chen et al., 2015) (https://github.com/tylin/coco-caption).

To verify the merit of our LSTM-A models, we compared the following state-of-the-art methods.

(1) NIC & LSTM (Vinyals et al., 2015): NIC attempts to directly translate from image pixels to natural language with a single deep neural network. The image representations are only injected into the LSTM at the initial time step. We directly extract the results reported in (You et al., 2016) and name this run as NIC. Furthermore, for fair comparison, we also include one run, LSTM, which is our implementation of NIC.

(5) Sentence-Condition (SC) (Zhou et al., 2016): Sentence-Condition is proposed most recently and exploits text-conditional semantic attention to generate semantic guidance for sentence generation by conditioning image features on current text content.

(6) MSR Captivator (Devlin et al., 2015): MSR Captivator employs both a Multimodal Recurrent Neural Network (MRNN) and a Maximum Entropy Language Model (MELM) (Fang et al., 2015) for sentence generation. The Deep Multimodal Similarity Model (DMSM) (Fang et al., 2015) is further exploited for sentence re-ranking.

(7) CaptionBot (Tran et al., 2016): CaptionBot is a publicly released image captioning system which is mainly built on vision models using deep residual networks (ResNets) (He et al., 2016) to detect visual concepts, the MELM (Fang et al., 2015) language model for sentence generation and the DMSM (Fang et al., 2015) for caption ranking. An entity recognition model for celebrities and landmarks is further incorporated to enrich captions, and a confidence scoring model is finally utilized to select the output caption.

4.4 PERFORMANCE COMPARISON

Performance on COCO. Table 1 shows the performances of different models on the COCO image captioning dataset. It is worth noting that the performances of different approaches here are based on different image representations. Specifically, the VGG architecture (Simonyan & Zisserman, 2015) is utilized as the image feature extractor in the methods of Hard-Attention & Soft-Attention and Sentence-Condition, while GoogleNet (Szegedy et al., 2015) is exploited in NIC, LRCN, ATT, LSTM and our LSTM-A. In view that the GoogleNet and VGG features are comparable, we compare directly with the reported results.

Table 1: Performance of our proposed models and other state-of-the-art methods on COCO, where B@N, M, R, C and S are short for BLEU@N, METEOR, ROUGE-L, CIDEr-D and SPICE scores. All values are reported as percentage (%).

| Model | B@1 | B@2 | B@3 | B@4 | M | R | C | S |
| NIC (Vinyals et al., 2015) | 66.6 | 45.1 | 30.4 | 20.3 | – | – | – | – |
| LRCN (Donahue et al., 2015) | 69.7 | 51.9 | 38.0 | 27.8 | 22.9 | 50.8 | 83.7 | 15.8 |
| HA (Xu et al., 2015) | 71.8 | 50.4 | 35.7 | 25 | 23 | – | – | – |
| SA (Xu et al., 2015) | 70.7 | 49.2 | 34.4 | 24.3 | 23.9 | – | – | – |
| ATT (You et al., 2016) | 70.9 | 53.7 | 40.2 | 30.4 | 24.3 | – | – | – |
| SC (Zhou et al., 2016) | 72 | 54.6 | 40.4 | 29.8 | 24.5 | – | 95.9 | – |
| LSTM (Vinyals et al., 2015) | 68.4 | 51.2 | 38 | 28.4 | 23.1 | 50.7 | 84.3 | 16 |
| LSTM-A1 | 72.3 | 55.8 | 42 | 31.7 | 24.9 | 53.3 | 96 | 17.8 |
| LSTM-A2 | 72.8 | 56.4 | 42.7 | 32.2 | 25 | 53.5 | 97.5 | 18 |
| LSTM-A3 | 73.1 | 56.4 | 42.6 | 32.1 | 25.2 | 53.7 | 98.4 | 18.2 |
| LSTM-A4 | 71.1 | 54.5 | 40.9 | 30.6 | 24 | 52.5 | 90.6 | 16.8 |
| LSTM-A5 | 73 | 56.5 | 42.9 | 32.5 | 25.1 | 53.8 | 98.6 | 18.2 |
| LSTM-A* | 95.7 | 82.5 | 68.5 | 55.9 | 34.1 | 67.3 | 150.5 | 26.8 |

Overall, the results across the eight evaluation metrics consistently indicate that our proposed LSTM-A exhibits better performance than all the state-of-the-art techniques, including non-attention models (NIC, LSTM, LRCN) and attention-based methods (Hard-Attention, Soft-Attention, ATT, Sentence-Condition). In particular, the CIDEr-D can achieve 98.6%, which is to date the highest performance reported on the COCO dataset when extracting image representations by GoogleNet. LSTM-A1, inputting only high-level attributes as representations, makes a relative improvement over LSTM, which feeds in image representations instead, of 11.6%, 7.8%, 5.1%, 13.9% and 11.25% in BLEU@4, METEOR, ROUGE-L, CIDEr-D and SPICE, respectively. The results basically indicate the advantage of exploiting high-level attributes over image representations for image captioning. Furthermore, by additionally incorporating attributes into the LSTM model, LSTM-A2, LSTM-A3 and LSTM-A5 lead to a performance boost, indicating that image representations and attributes are complementary and thus have mutual reinforcement for image captioning. Similar in spirit, LSTM-A4 improves LRCN by further taking attributes into account. There is a significant performance gap between ATT and LSTM-A5. Though both runs involve the utilization of image representations and attributes, they are fundamentally different in the way that the performance of ATT is a result of modulating the strength of attention on attributes to the previous words, while LSTM-A5 employs attributes as auxiliary knowledge to complement image representations. This somewhat reveals the weakness of the semantic attention model, where prediction errors will accumulate along the generated sequence.

Performance on the COCO online testing server. We also submitted our best run in terms of METEOR, i.e., LSTM-A3, to the online COCO testing server and evaluated the performance on the official testing set. Table 2 shows the performance Leaderboard on the official testing image set with 5 reference captions (c5) and 40 reference captions (c40). Please note that here we utilize the outputs of the 2,048-way pool5 layer from ResNet-152 as image representations and train the attribute detectors by ResNet-152 in our final submission. Only the latest top-3 performing methods which have been officially published are included in the table. Compared to the top performing methods, our proposed LSTM-A3 achieves the best performance across all the evaluation metrics on both c5 and c40
testing sets, and ranks first on the Leaderboard.

Table 2: Leaderboard of the published state-of-the-art image captioning models on the online COCO testing server (http://mscoco.org/dataset/#captions-leaderboard), where B@N, M, R, and C are short for BLEU@N, METEOR, ROUGE-L, and CIDEr-D scores. All values are reported as percentage (%).

| Model | B@1 c5 | B@1 c40 | B@2 c5 | B@2 c40 | B@3 c5 | B@3 c40 | B@4 c5 | B@4 c40 | M c5 | M c40 | R c5 | R c40 | C c5 | C c40 |
| MSM@MSRA (LSTM-A3) | 75.1 | 92.6 | 58.8 | 85.1 | 44.9 | 75.1 | 34.3 | 64.6 | 26.6 | 36.1 | 55.2 | 70.9 | 104.9 | 105.3 |
| ATT (You et al., 2016) | 73.1 | 90 | 56.5 | 81.5 | 42.4 | 70.9 | 31.6 | 59.9 | 25 | 33.5 | 53.5 | 68.2 | 94.3 | 95.8 |
| Google (Vinyals et al., 2015) | 71.3 | 89.5 | 54.2 | 80.2 | 40.7 | 69.4 | 30.9 | 58.7 | 25.4 | 34.6 | 53 | 68.2 | 94.3 | 94.6 |
| MSR Captivator (Devlin et al., 2015) | 71.5 | 90.7 | 54.3 | 81.9 | 40.7 | 71 | 30.8 | 60.1 | 24.8 | 33.9 | 52.6 | 68 | 93.1 | 93.7 |

Compared to LSTM-A1, LSTM-A2, which is augmented by integrating image representations, performs better, but the performances are lower than those of LSTM-A3. The results indicate that LSTM-A3, in comparison, benefits from the mechanism of first feeding high-level attributes into the LSTM instead of starting from image representations as in LSTM-A2, so that a good starting point is more likely to be attained, leading to a performance gain. LSTM-A4, feeding the image representations at each time step, yields inferior performances to LSTM-A3, which only inputs image representations once. We speculate that this may be because the noise in the image can be explicitly accumulated and thus the network overfits more easily. In contrast, the performances of LSTM-A5, which feeds attributes at each time step, show improvements over LSTM-A3. The results demonstrate that the high-level attributes are more accurate and more easily translated into a human-understandable sentence. Among the five proposed LSTM-A architectures, LSTM-A3 achieves the best performances in terms of BLEU@1 and METEOR, while LSTM-A5 performs the best on the other six evaluation metrics. The performances of the oracle run LSTM-A* could be regarded as the upper bound of employing attributes in our framework and lead to a large performance gain against LSTM-A3.
Such an upper bound enables us to obtain more insights on the factors accounting for the success of the current attribute-augmented architecture and also provides guidance to future research in this direction. More specifically, the results, on one hand, indicate the advantage and great potential of leveraging attributes for boosting image captioning, and on the other, suggest that more efforts are further required towards mining and representing attributes more effectively.

[Figure 2 shows two examples with per-step word prediction probabilities: a birthday cake image, with detected attributes (candles: 1, birthday: 0.999, lit: 0.999, cake: 0.982, table: 0.456, it: 0.366, sitting: 0.356), for which LSTM outputs "a cake sitting on top of a table" and LSTM-A3 outputs "a birthday cake with candles on it"; and a dog image, with detected attributes (dog: 1, yellow: 0.989, frisbee: 0.873, white: 0.652, brown: 0.601, leash: 0.417, sidewalk: 0.393), for which LSTM outputs "a dog that is laying down on the ground" and LSTM-A3 outputs "a brown and white dog holding a yellow frisbee".]

Figure 2: Visualization of prediction changes w.r.t. the additional attribute inputs. The attributes are predicted by the MIL method in (Fang et al., 2015) and the output sentences are generated by LSTM and LSTM-A3.

[Figure 3: six example images, each shown with a ground-truth caption and the sentences generated by LSTM-A1 through LSTM-A5; e.g., for the image with ground truth "a pile of stuffed animals hanging from the ceiling of a store", LSTM-A3 and LSTM-A5 generate "a bunch of stuffed animals hanging from a ceiling".]
To better understand how satisfactory the sentences generated by the different methods are, we also conducted a human study comparing our LSTM-A3 against three approaches, i.e., CaptionBot, LRCN and LSTM. A total of 12 evaluators (6 females and 6 males) from different education backgrounds, including computer science (4), business (2), linguistics (2) and engineering (4), were invited, and a subset of 1,000 images was randomly selected from the testing set for the subjective evaluation. The evaluation process is as follows. All the evaluators are organized into two groups. We show the first group all four sentences generated by each approach plus the five human-annotated sentences and ask them the question: Do the systems produce captions resembling human-generated sentences? In contrast, we show the second group only one sentence at a time, generated by a different approach or by human annotation, and ask: Can you determine whether the given sentence has been generated by a system or by a human being? From the evaluators' responses, we calculate two metrics: 1) M1: percentage of captions that are evaluated as better than or equal to the human caption; 2) M2: percentage of captions that pass the Turing Test. Table 3 lists the results of the user study. Overall, our LSTM-A3 is clearly the winner on both criteria. In particular, it achieves 62.8% and 72.2% in terms of M1 and M2, respectively, an absolute improvement over the best competitor CaptionBot of 4.6% and 5.9%."}, {"section_index": "8", "section_name": "4.6 QUALITATIVE ANALYSIS", "section_text": "Visualization of prediction changes w.r.t. the additional attribute inputs. Figure 2 shows two image examples that illustrate the word prediction changes with respect to the additional attribute inputs.
[Figure 4: eight example images (a man and a dog on a boat; a banana market; a man leading sheep down a road; a person holding a cell phone in front of a laptop; a red and white plane over water; a zebra in snow; people walking with umbrellas; a traffic light) with their detected attributes, the sentences generated by LSTM, CaptionBot and our LSTM-A3, and ground-truth captions.]
Figure 4: Attributes and sentence generation results on COCO. The attributes are predicted by the MIL method in (Fang et al., 2015) and the output sentences are generated by 1) LSTM, 2) CaptionBot, 3) our LSTM-A3, and 4) Ground Truth: three randomly selected ground-truth sentences.
Take the first image as an example: the predicted subject is "a cake" in the LSTM model. By additionally incorporating the detected attributes, e.g., "candles" and "birthday," the output subject in the sentence by our LSTM-A3 changes into "a birthday cake with candles," demonstrating the advantage of the auxiliary attribute inputs.
Sentence generation comparison between the five LSTM-A architectures. Examples of sentences generated by our five LSTM-A architectures are further illustrated in Figure 3. In general, the sentences generated by LSTM-A3 and LSTM-A5 are very comparable and more accurate than those by LSTM-A1, LSTM-A2 and LSTM-A4.
For instance, LSTM-A3 and LSTM-A5 produce the sentence "a bunch of stuffed animals hanging from a ceiling," which describes the first image very precisely.
Sentence generation comparison across different approaches. Figure 4 showcases a few sentence examples generated by the different methods, the detected high-level attributes, and human-annotated ground-truth sentences. From these exemplar results, it is easy to see that all of these automatic methods can generate somewhat relevant sentences, while our proposed LSTM-A3 can predict more relevant keywords by jointly exploiting high-level attributes and image representations for image captioning. For example, compared to the subject terms "a group of people" and "a man" in the sentences generated by LSTM and CaptionBot respectively, "a man and a dog" in our LSTM-A3 describes the content of the first image more precisely, since the keyword "dog" is one of the detected attributes and is directly injected into the LSTM to guide the sentence generation. Similarly, the verb term "holding," which is also detected as a high-level attribute, presents the fourth image more exactly. Moreover, our LSTM-A3 can generate more descriptive sentences by enriching the semantics with high-level attributes. For instance, with the detected adjective "red," the generated sentence "a red and white plane flying over a body of water" for the fifth image depicts the image content more comprehensively."}, {"section_index": "9", "section_name": "4.7 ANALYSIS OF THE BEAM SIZE k", "section_text": "In order to analyze the effect of the beam size k at testing time, we illustrate the performances of our two top-performing architectures, LSTM-A3 and LSTM-A5, with the beam size in the range {1, 2, 3, 4, 5} in Figure 5. To make all performances fall into a comparable scale, all scores are normalized by the highest score of each evaluation metric.
[Figure 5: normalized BLEU@1-4, METEOR, ROUGE-L and CIDEr-D scores (roughly 0.88 to 1.0) as the beam size k varies from 1 to 5; (a) k for LSTM-A3, (b) k for LSTM-A5.]
Figure 5: The effect of beam size k on (a) LSTM-A3 and (b) LSTM-A5.
As shown in Figure 5, almost all performances in terms of each evaluation metric follow an inverted-V ("Λ") shape as the beam size k varies from 1 to 5. Hence, we set the beam size k to 3 in our experiments, which achieves the best performance with a relatively small beam size.
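For reference, the decoding procedure whose beam size is analyzed above is standard beam search. A minimal sketch follows (our own illustration; `step_fn` is a hypothetical stand-in for one step of the trained LSTM, returning log-probabilities over the vocabulary):

```python
import heapq

def beam_search(step_fn, init_state, bos, eos, k=3, max_len=20):
    """Generic beam search with beam size k.
    step_fn(state, token) -> (new_state, {token: log_prob, ...})."""
    beams = [(0.0, [bos], init_state)]            # (log-prob, tokens, state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, toks, state in beams:
            new_state, log_probs = step_fn(state, toks[-1])
            for tok, lp in log_probs.items():
                candidates.append((score + lp, toks + [tok], new_state))
        # keep only the k highest-scoring partial sentences
        beams = heapq.nlargest(k, candidates, key=lambda c: c[0])
        still_open = []
        for cand in beams:
            (finished if cand[1][-1] == eos else still_open).append(cand)
        beams = still_open
        if not beams:                              # every beam has ended
            break
    return max(finished + beams, key=lambda c: c[0])[1]
```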
"}, {"section_index": "10", "section_name": "DISCUSSIONS AND CONCLUSIONS", "section_text": "We have presented the Long Short-Term Memory with Attributes (LSTM-A) architectures, which explore both image representations and high-level attributes for image captioning. In particular, we study the problem of augmenting images' high-level attributes to complement image representations for enhancing sentence generation. To verify our claim, we have devised variants of the architecture by modifying the placement and moment, i.e., where and when to feed in the two kinds of representations. Experiments conducted on the COCO image captioning dataset validate our proposal and analysis. Performance improvements are clearly observed when comparing to other captioning techniques and, more remarkably, the performance of our LSTM-A to date ranks first on the COCO image captioning Leaderboard.
Our future works are as follows. First, more attributes will be learnt from large-scale image benchmarks, e.g., the YFCC-100M dataset, and integrated into image captioning. We will further analyze the impact of different numbers of attributes involved. Second, how to generate free-form and open-vocabulary sentences with the learnt attributes is also expected to be explored."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, and Margaret Mitchell. Language models for image captioning: The quirks and what works. In ACL, 2015.
Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. Every picture tells a story: Generating sentences from images. In ECCV, 2010.
Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In ECCV, 2016.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In MM, 2014.
Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015.
Ryan Kiros, Ruslan Salakhutdinov, and Rich Zemel. Multimodal neural language models. In ICML, 2014.
Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C Berg, and Tamara L Berg. Babytalk: Understanding and generating simple image descriptions. IEEE Trans. on PAMI, 2013.
Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In ACL Workshop, 2004.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan L. Yuille. Explain images with multimodal recurrent neural networks. In NIPS Workshop on Deep Learning, 2014.
Devi Parikh and Kristen Grauman. Relative attributes. In ICCV, 2011.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015.
Kenneth Tran, Xiaodong He, Lei Zhang, Jian Sun, Cornelia Carapcea, Chris Thrasher, Chris Buehler, and Chris Sienkiewicz. Rich image captioning in the wild. arXiv preprint arXiv:1603.09016, 2016.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In CVPR, 2015.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In CVPR, 2015.
Qi Wu, Chunhua Shen, Lingqiao Liu, Anthony Dick, and Anton van den Hengel. What value do explicit high-level concepts have in vision to language problems? In CVPR, 2016.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
Yezhou Yang, Ching Lik Teo, Hal Daume III, and Yiannis Aloimonos. Corpus-guided sentence generation of natural images. In EMNLP, 2011."}]
Hy0L4t5el [{"section_index": "0", "section_name": "TREE-STRUCTURED VARIATIONAL AUTOENCODER", "section_text": "Richard Shin*
Department of Electrical Engineering and Computer Science, University of California, Berkeley
Google Research, Google Brain, Google DeepMind {alemi, geoffreyi, vinyals}@google.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "These techniques have led to significant advances in modeling images, text, sounds, and other kinds of complicated data. Language modeling with sequential neural models has halved perplexity (roughly, the error at predicting each word) compared to n-gram methods (Jozefowicz et al., 2016). Neural machine translation using sequence-to-sequence methods has closed half of the gap in quality between prior machine translation efforts and human translation (Wu et al., 2016). Generative image models have similarly progressed such that they can generate samples largely indistinguishable from the original data, at least for relatively small and simple images (Gregor et al., 2015; 2016; Kingma et al., 2016; Salimans et al., 2016; van den Oord et al., 2016), although the quality of the model here is harder to measure in an automated way (Theis et al., 2015).
However, many kinds of data we might wish to model are naturally structured as a tree. Computer program code follows a rigorous grammar, and the usual first step in processing it involves parsing it into an abstract syntax tree, which simultaneously discards aspects of the code irrelevant to the semantics such as whitespace and extraneous parentheses, and makes it more convenient to further interpret, analyze, or manipulate. Statements made in formal logic similarly have a hierarchical structure, which determines arguments to predicates and functions, the scoping of variables and quantifiers, and the application of logical connectives. Natural language sentences also contain a latent syntactic structure which is necessary for determining the meaning of the sentence.
* Majority of work done while at Google"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Many kinds of variable-sized data we would like to model contain an internal hierarchical structure in the form of a tree, including source code, formal logical statements, and natural language sentences with parse trees. For such data it is natural to consider a model with matching computational structure. In this work, we introduce a variational autoencoder-based generative model for tree-structured data. We evaluate our model on a synthetic dataset, and a dataset with applications to automated theorem proving. By learning a latent representation over trees, our model can achieve similar test log likelihood to a standard autoregressive decoder, but with the number of sequentially dependent computations proportional to the depth of the tree instead of the number of nodes in the tree.
A significant amount of recent and ongoing work has explored the use of neural networks for modeling and generating various kinds of data. Newer techniques like the variational autoencoder (Rezende et al., 2014; Kingma & Welling, 2013) and generative-adversarial networks (Goodfellow et al., 2014) enable training of graphical models where the likelihood function is a complicated neural network which normally makes it infeasible to specify and optimize the marginal distribution analytically. Another family of techniques involves choosing an ordering of the
dimensions of the data (which is particularly natural for sequences such as sentences) and training a neural network to estimate the distribution over the value of the next dimension given all the dimensions we have observed so far.
In this paper, we explore how we can adapt the variational autoencoder to modeling tree-structured data. In general, it is possible to treat the tree as a sequence and then use a sequential model. However, a model which follows the structure of the tree may better capture long-range dependencies: recurrent models sometimes have trouble learning to remember and use information from the distant past when it is relevant to the current context, but these distant parts of the input may be close to each other within the tree structure.
In our proposed approach, we decide the identity of each node in the tree by using a top-down recursive neural network, causing the distributed representation which decides the identity of each node in the tree to be computed as a function of the identity and relative location of its parent nodes.
By using the architecture of the variational autoencoder (Rezende et al., 2014; Kingma & Welling, 2013), our model can learn to capture various features of the trees within continuous latent variables, which are added as further inputs into the top-down recursive neural network and condition the overall generation process. These latent variables allow us to generate different parts of the tree in parallel; specifically, given a parent node n and its children c1 and c2, the generation of (the distribution placed over different values of) c1 and its descendants is independent of the generation of c2 and its descendants (and vice versa), once we condition upon the latent variables. By structuring the model this way, while our model generates one dimension (the identity of each node within the tree) of the data at a time, it is not autoregressive, as the probability distribution for one dimension is not a function of the previously generated nodes.
We evaluate our model on a variety of datasets, some synthetic and some real. Our experimental results show that it achieves comparable test set log likelihood to autoregressive sequential models which do not use any latent variables, while offering the following properties:
- For balanced trees, generation requires O(log n) rather than the O(n) timesteps required for a sequential model, because the children of each node can be generated in parallel.
- It is straightforward to resample a subtree while keeping the other parts of the tree intact.
- The generated trees are syntactically valid by construction.
- The model produces a latent representation for each tree, which may prove useful in other applications.
Recursive neural nets, which process a tree in a bottom-up way, have been popular in natural language processing for a variety of tasks, such as sentiment analysis (Socher et al., 2013), question answering (Iyyer et al., 2014), and semantic relation extraction (Socher et al., 2012). Starting from the leaves of the tree, the model computes a representation for each node by combining the representations of its child nodes. In the case of natural language processing, each tree typically represents one sentence, with the leaf nodes corresponding to the words in the sentence and the structure of the internal nodes determined by the constituency parse tree for the sentence.
If we restrict ourselves to binary trees (given that it is possible to binarize arbitrary trees in a lossless way), then we compute the k-dimensional representation r_n ∈ R^k for a node n by combining the representations of nodes n_left and n_right:
r_n = f(W r_{n_left} + V r_{n_right})
where W and V are square matrices (in R^{k×k}) and f is a nonlinear activation function, applied elementwise. Leaf nodes are represented by embedding the content of the leaf node into a k-dimensional vector, by specifying a lookup table from words to embedding vectors for instance.
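As a concrete illustration of this bottom-up computation (a minimal sketch of the generic recursive network above, not of any specific cited model):

```python
import numpy as np

k = 8
rng = np.random.default_rng(0)
W, V = rng.normal(size=(k, k)), rng.normal(size=(k, k))
embed = {w: rng.normal(size=k) for w in ["a", "movie", "is", "great"]}

def encode(node):
    """node is either a word (leaf) or a (left, right) pair."""
    if isinstance(node, str):
        return embed[node]                   # leaf: embedding-table lookup
    left, right = node
    # r_n = f(W r_left + V r_right), with f = tanh
    return np.tanh(W @ encode(left) + V @ encode(right))

r_root = encode((("a", "movie"), ("is", "great")))  # sentence representation
```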
Variations and extensions of this approach specify more complicated relationships between r_n and the children's representations r_{n_left} and r_{n_right}, or allow internal nodes to have variable numbers of children. For example, Tai et al. (2015) extend LSTMs to tree-structured models by dividing the vector representation of each node into a hidden state and a memory cell.
Neural network models which generate or explore a tree top-down have received less attention, but have been applied to generation and parsing tasks. Zhang et al. (2015) generate natural language sentences along with their dependency parses simultaneously. In their specification of a dependency parse tree, each word is connected to one parent word and has a variable number of left and right children (a depth-first in-order traversal on this tree recovers the original sentence). Their model generates these trees by conditioning the probability of generating each word on its ancestor words and the previously-generated sibling words using various LSTMs. Dyer et al. (2016) generate sentences jointly with a corresponding constituency parse tree. They use a shift-reduce parser architecture where the shift action is replaced with a word-generation action, and so the sentence and the tree can be generated by performing a sequence of actions corresponding to a depth-first pre-order traversal of the tree. Each action in the sequence is predicted based upon the tree constructed so far, the words (the tree's terminal nodes) generated so far, and the previous actions performed. Dong & Lapata (2016) generate tree-structured logical forms using LSTMs, where the LSTM state branches along with the tree's structure; they focus on generating these logical forms when conditioned upon a natural-language description of them.
The variational autoencoder (Kingma & Welling, 2013; Rezende et al., 2014), or VAE for short, provides a way to train a generative model with a fixed prior p(z) and a neural network used to specify p_θ(x | z). Typically, the prior p(z) is taken to be a standard multivariate normal distribution (mean at 0) with diagonal unit covariance. Naively, in order to optimize log p(x), we need to compute the following integral:
log p_θ(x) = log ∫ p_θ(x | z) p(z) dz
which can be tractable when p_θ(x | z) is simple but not when we want to use a neural network to represent it. Inference of the posterior p(z | x) also becomes intractable.
Instead, we learn a second neural network q_φ(z | x) to approximate the true posterior, and use the following variational bound:
log p(x) ≥ −D_KL(q_φ(z | x) || p(z)) + E_{q_φ(z|x)}[log p_θ(x | z)]
where D_KL represents the Kullback-Leibler divergence between the two distributions.
Given that we represent q_φ(z | x) with a neural network which outputs the mean and diagonal covariance of a normal distribution, we can analytically compute the KL divergence term and then use the reparameterization trick:
E_{q_φ(z|x)}[log p_θ(x | z)] = E_{p(ε)}[log p_θ(x | z = μ + σ ⊙ ε)]
where p(ε) is a standard multivariate normal distribution, and μ and σ are outputs of the neural network implementing q_φ(z | x).
These two techniques combined allow us to compute stochastic gradients (by sampling ε, treating it as constant, and backpropagating through the model) and use standard neural network training techniques (such as SGD, Adagrad, and Adam) to train the model.
Another interpretation of the variational autoencoder follows from a modification of the regular autoencoder, where we would like to learn a mapping x → z from the data to a more compact representation z, and an inverse mapping z → x. In the VAE, we replace the deterministic x → z with a probabilistic q(z | x), and as a form of regularization, we ensure that this distribution is close to a prior p(z).
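For concreteness, a minimal numpy sketch of this one-sample objective (our own illustration; a real implementation would backpropagate through mu and sigma):

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(mu, sigma, log_px_given_z):
    """One-sample estimate of -KL(q(z|x) || p(z)) + E_q[log p(x|z)]
    with q = N(mu, diag(sigma^2)) and a standard normal prior."""
    # closed-form KL between N(mu, diag(sigma^2)) and N(0, I)
    kl = 0.5 * np.sum(mu**2 + sigma**2 - 2.0 * np.log(sigma) - 1.0)
    eps = rng.normal(size=mu.shape)   # reparameterization trick: z = mu + sigma * eps
    z = mu + sigma * eps
    return -kl + log_px_given_z(z)    # log_px_given_z is the decoder's log-likelihood
```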
In this section, we describe how we combine the variational autoencoder and recursive neural networks in order to build our model.
Plus
  left:  Number(1)
  right: Minus
    left:  Number(2)
    right: Number(3)
Figure 1: An example tree for 1 + 2 - 3. The binary operators are represented with non-terminal nodes, with two required children "left" and "right". The numbers are terminal nodes.
We consider arbitrarily-branching typed trees where each node contains a type, and either child nodes or a terminal value. Each type may be a terminal type or a non-terminal type; nodes of terminal type contain a value, and nodes of non-terminal type have (zero or more) child nodes.
A non-terminal type T comes with a specification for how many children a node N_T of type T should have, and the types permissible for each child location. We distinguish three types of child nodes:
- N_T may have some number of singular child nodes. For the ith singular child, we specify SingularChild(T, i) = {T_1, ..., T_n} as the set of types that child node can have. SingularChildCount(T) gives the number of singular child nodes in T.
- If a singular child node is optional, we denote this by including ∅ in its set of allowed types.
- N_T may have an arbitrary number of repeated child nodes. Each repeated child node must have a type belonging to RepeatedChildren(T) = {T_1, ...}. If this set is empty, no repeated child nodes are allowed.
These children may be of different types.
For each terminal type, we have a list of values that a node of this type can have. We also have a list of types that the root node can have.
The above specification serves as an extension of context-free grammars, which are commonly used to specify formal languages. The main difference is in optional and repeated children, which makes it easier to specify an equivalent grammar with fewer non-terminal types.
As an example, consider the for loop in the C programming language. A node representing this contains three singular children: an initializer expression, the condition expression (evaluated to check whether the loop should continue running), and the iteration statement, which runs at the end of each loop iteration. It also has repeated children, one child per statement in the loop body.
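A possible encoding of such a grammar (hypothetical data structures of our own design, instantiated here with the arithmetic types of Figure 1; the actual grammars used in the experiments are described in Section 4):

```python
# Hypothetical encoding of the typed-tree grammar described above. For each
# non-terminal, "singular" lists the allowed types per singular child and
# "repeated" the allowed repeated-child types; an optional singular child
# would include None in its allowed-type set.
GRAMMAR = {
    "Plus":   {"singular": [{"Plus", "Minus", "Number"},     # left child
                            {"Plus", "Minus", "Number"}],    # right child
               "repeated": set()},                           # no repeated children
    "Minus":  {"singular": [{"Plus", "Minus", "Number"},
                            {"Plus", "Minus", "Number"}],
               "repeated": set()},
    # terminal type: the list of permissible values
    "Number": {"values": list(range(10))},
}
ROOT_TYPES = {"Plus", "Minus", "Number"}
```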
"}, {"section_index": "3", "section_name": "3.2 BUILDING A TREE", "section_text": "Now that we have specified the kinds of trees we consider, let us look at how we might build one. First, we describe the basic building block that we use to create one node; then we will look at how to compose these together to build an entire tree.
Assume that we know the node that we are about to construct should have type T, and that we have a hidden state h ∈ R^k which contains further information about the node.
If T is a terminal type, we use WhichTerminalValue_T(h), producing a probability distribution over the set of possible values, and sample from this distribution to choose the value. If T is a non-terminal type, we use the following procedure GenerateNode(T, h):
1. Compute m = SingularChildCount(T) + 1{RepeatedChildren(T) ≠ ∅}. In other words, count the number of singular children, and add 1 if the type allows repeated children.
2. Compute g_1, ..., g_m = Split_T(h). The function Split_T(h): R^k → R^k × ... × R^k maps the k-dimensional vector h into m separate k-dimensional vectors g_1 to g_m.
3. For each singular child i:
(a) Sample T_i ~ WhichChildType_{T,i}(g_i) from a distribution over the types in SingularChild(T, i).
(b) If T_i ≠ ∅, use GenerateNode(T_i, g_i) to build the child node recursively.
4. If T specifies repeated children:
(a) Compute g_cur, g_next = SplitRepeated_T(g_m).
(b) Sample s ~ StopRepeat_T(g_cur) from a Bernoulli distribution. If s = 1, stop generating repeated children.
(c) Sample T_child ~ WhichChildType_{T,repeated}(g_cur), a probability distribution over the types in RepeatedChildren(T).
(d) Use GenerateNode(T_child, g_cur) to build this child recursively.
(e) Set g_m := g_next and repeat this loop.
For building the entire tree starting from the root, we assume that we have an embedding z which encodes information about the entire tree (we describe how we obtain this in the next section). We sample T_root ~ WhichRootType(z), the type for the root node, and then run GenerateNode(T_root, z).
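The control flow of GenerateNode can be summarized in code (an illustrative sketch, not the paper's implementation: the learned Which*/Split/StopRepeat networks are replaced by uniform random choices with a depth cap so the skeleton runs standalone, and the hidden states g_i that condition those choices are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def is_terminal(T, grammar):
    return "values" in grammar[T]

def generate_node(T, grammar, depth=0, max_depth=4):
    """Structural skeleton of GenerateNode(T, h); assumes every singular
    child allows at least one terminal type at the depth cap."""
    spec = grammar[T]
    if is_terminal(T, grammar):                      # WhichTerminalValue_T
        return (T, int(rng.choice(spec["values"])))
    children = []
    for allowed in spec["singular"]:                 # WhichChildType_{T,i}
        opts = sorted(t for t in allowed
                      if depth < max_depth or is_terminal(t, grammar))
        Ti = opts[rng.integers(len(opts))]
        children.append(generate_node(Ti, grammar, depth + 1, max_depth))
    while spec["repeated"] and rng.random() < 0.5:   # StopRepeat_T
        opts = sorted(spec["repeated"])              # WhichChildType_{T,repeated}
        Tc = opts[rng.integers(len(opts))]
        children.append(generate_node(Tc, grammar, depth + 1, max_depth))
    return (T, children)

tree = generate_node("Plus", GRAMMAR)  # GRAMMAR as sketched in the previous section
```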
"}, {"section_index": "4", "section_name": "3.3 ENCODING A TREE", "section_text": "Recall that the variational autoencoder framework involves training two models simultaneously: p(x | z), the generative (or decoding) model, and q(z | x), the inference (or encoding) model. The previous section described how we specify p(x | z), so we now turn our attention to q(z | x).
Overall, we build the inference model by inverting the flow of data in the generative model. Specifically, we use Encode(n) to encode a node n with type T:
- If T is a terminal type, return Embedding(v) ∈ R^k by performing a lookup of the contained value v within a table.
- If T is a non-terminal type:
1. Compute g_i = Encode(n_i) for each singular child n_i of n. If n_i is missing, then g_i = 0.
2. If T specifies repeated children, set g_repeated := 0 and n_child to the last repeated child of n, and then run:
(a) Compute g_child = Encode(n_child).
(b) Set g_repeated := MergeRepeated_T(g_repeated, g_child) ∈ R^k.
(c) Move n_child to the previous repeated child, and repeat (until we run out of repeated children).
3. Combine the child encodings (g_1, ..., and g_repeated if present) with Merge_T to obtain the encoding of n.
Thus h_root = Encode(n_root) gives a summary of the entire tree as a k-dimensional embedding. We then construct q(z | x) = N(μ, σ) where μ = W_μ h_root and σ = softplus(W_σ h_root). Applying softplus(x) = log(1 + e^x) as a nonlinearity gives us a way to ensure that σ is positive as required."}, {"section_index": "5", "section_name": "3.4 IMPLEMENTING SPLIT, MERGE, AND WHICH FUNCTIONS", "section_text": "Combine. We can consider the Split: R^k → R^k × ... × R^k and Merge: R^k × ... × R^k → R^k functions to be specializations of a more general function Combine: R^k × ... × R^k → R^k × ... × R^k which takes m inputs and produces n outputs.
A straightforward implementation of Combine is the following:
[y_1 ... y_n] = f(W [x_1 ... x_m] + b)
where we have taken x_i and y_i to be column vectors in R^k, [x_1 ... x_m] stacks the vectors x_i vertically, W ∈ R^{nk×mk} and b ∈ R^{nk} are the learned weight matrix and bias vector respectively, and f is a nonlinearity applied elementwise.
For Which: R^k → R^d, which computes a probability distribution over d choices, we use a specialization of Combine with one input and one (d-sized rather than k-sized) output, and use softmax as the nonlinearity f.
While this basic implementation sufficed initially, we discovered that two modifications led to better performance, which we describe subsequently.
Gating. We added a form of multiplicative gating, similar to those used in Gated Recurrent Units (Chung et al., 2014), Highway Networks (Srivastava et al., 2015), and Gated PixelCNN (van den Oord et al., 2016). The multiplicative gate enables the Combine function to more easily pass information from the inputs through to the outputs if that is preferable to transforming the input. Furthermore, the multiplicative interactions used in computing the gates may help the neural network learn more complicated functions. The gated variant mixes the transformed outputs [ỹ_1 ... ỹ_n] = f(W [x_1 ... x_m] + b) with direct copies of the inputs:
y_i = g_{y_i} ⊙ ỹ_i + Σ_{j=1}^m g_{x_j,y_i} ⊙ x_j,
g_{y_i} = σ(W_{g_y}^{(i)} [x_1 ... x_m] + b_{g_y}),  g_{x_j,y_i} = σ(W_{g_x}^{(j,i)} [x_1 ... x_m] + b_{g_x}).
We initialized b_{g_x} = 1 and b_{g_y} = −1 so that g_{y_i} would start out as a small value and g_{x_j,y_i} would be large, encouraging copying of the inputs x_j to the outputs y_i.
Layer normalization. We found that using layer normalization (Ba et al., 2016) also helps stabilize the learning process. For our model, it is difficult to use batch normalization because the connections of each layer (the functions Merge, Split, Which) occur at variable points according to the particular tree we are considering.
Following the procedure in the appendix of Ba et al. (2016), we replace each instance of f(W [x_1 ... x_m] + b) with f(LN(W_1 x_1; α_1) + ... + LN(W_m x_m; α_m) + b) where W_i ∈ R^{nk×k} are horizontal slices of W and the α_i are learned multiplicative constants. We use LN(z; α) = α ⊙ (z − μ)/σ where μ ∈ R is the mean of z ∈ R^k and σ ∈ R is the standard deviation of z."}, {"section_index": "6", "section_name": "3.5 WEIGHT SHARING", "section_text": "In the above model, each function with a different name has different weights. For example, if we have two types PLUS and MINUS, each with two required children, then Split_PLUS and Split_MINUS will have different weights even though they both have the same signature R^k → R^k × R^k.
However, this may be troublesome when we have a very large number of types, because in this scheme the amount of weights increases linearly with the number of types. For such cases, we can apply some of the following modifications:
- Replace all instances of Split_T: R^k → R^k × ... × R^k and SplitRepeated_T with a single SplitRec: R^k → R^k × R^k. We can apply SplitRec recurrently to get the desired number of child embeddings. Similarly, replace instances of Merge with MergeRec.
- Share weights across the Which functions: a Which function which produces a distribution over T_1, ..., T_n contains weights and a bias for each T_i. We can share these weights and biases across all Which functions where T_i appears."}, {"section_index": "7", "section_name": "3.6 VARIABLE-SIZED LATENT STATE", "section_text": "In order to achieve low reconstruction error E_{q(z|x)}[log p_θ(x | z)], the encoder and decoder networks must learn how to encode all information about a tree in z and then be able to reproduce the tree from this representation. If the tree is large, it becomes a difficult optimization problem to learn how to do this effectively, and may require higher-capacity networks in order to succeed at all, which would require more time to train.
Instead, we can encode the tree with a variable number of latent state vectors. For each node n_i in the tree, we specify q(z_{n_i} | x) = N(μ_{n_i}, σ_{n_i}) where
μ_{n_i} = W_μ Encode(n_i),  σ_{n_i} = softplus(W_σ Encode(n_i)).
Then when computing GenerateNode(T, h), we first sample z_{n_i} ~ q(z_{n_i} | x) at training time or z_{n_i} ~ p(z) at generation time, and then use h′ = MergeLatent(h, z_{n_i}) in lieu of h.
We fixed the prior of each latent vector z_{n_i} to be the standard multivariate normal distribution with diagonal unit covariance, and did not investigate computing the prior as a function of other samples of z, as in Chung et al. (2015) or Fraccaro et al. (2016), which also used a variable number of latent state vectors.
For purposes of comparison, we implemented a standard LSTM model for generating each node of the tree sequentially with a depth-first traversal, similar to Vinyals et al. (2015). The model receives each non-terminal type, and terminal value, as a separate token. We begin the sequence with a special (BOS) token. Whenever an optional child is not present, or at the end of a sequence of repeated children, we insert (END). This allows us to unambiguously specify a tree following a given grammar as a sequence of tokens.
At generation time, we keep track of the partially-generated tree in order to only consider those tokens which would be syntactically allowed to appear at that point. We also tried using this constraint at training time: when computing the output probability distribution, only consider the syntactically allowed tokens and leave the unnormalized log probabilities of the others unconstrained. However, we found that for our datasets, this did not help much with performance and led to overfitting.
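A sketch of such a linearization (our own, and slightly simplified: it closes every non-terminal's child list with (END), whereas the baseline described above inserts (END) only for absent optional children and after runs of repeated children):

```python
def linearize(node):
    """Depth-first pre-order serialization of a (type, children-or-value) tree
    into tokens; terminals emit their type followed by their value."""
    T, content = node
    if not isinstance(content, list):      # terminal node
        return [T, str(content)]
    toks = [T]
    for child in content:
        toks += linearize(child)
    return toks + ["(END)"]                # close this node's children

tokens = ["(BOS)"] + linearize(
    ("Plus", [("Number", 1), ("Minus", [("Number", 2), ("Number", 3)])]))
```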
"}, {"section_index": "8", "section_name": "4.2 SYNTHETIC ARITHMETIC DATA", "section_text": "To evaluate the performance of our models in a controlled way, we created a synthetic dataset consisting of arithmetic expressions of a given depth which evaluate to a particular value.
Grammar. We have two non-terminal types, PLUS and MINUS, and one terminal type NUMBER. PLUS and MINUS have two required children, left and right, each of which can be any of PLUS, MINUS, or NUMBER. For NUMBER, we allowed terminal values 0 to 9.
Generating the data. We generate trees with a particular depth, defined as the maximal distance from the root to any terminal node. As such, we consider 1 + (2 + 3) and (1 + 2) - (3 + 4) to both have depth 3. To get trees of depth d which evaluate to v, we first sampled 1,000,000 trees uniformly at random from all binary tree structures up to depth d - 1, randomly assigning each non-terminal node to PLUS or MINUS and setting each terminal node to a random integer between 0 and 9. Then we randomly pick two such trees which, when combined with PLUS or MINUS, evaluate to v, to build a tree of depth d.
As training data, we generated 100,000 trees of depth 5, 7, 9, and 11. Within each set of trees, each quarter evaluates to -10, -5, 5, and 10 respectively. We use a test set of 1,024 trees, which we generated by first sampling a new set of 1,000,000 subtrees independently.
Table 1: Statistics of the synthetic arithmetic datasets, and log likelihoods of models trained on them. To estimate a tighter bound for log p(x), we use IWAE (Burda et al., 2015) with 50 samples of z. "Tree, no VAE" means there was no encoder; instead, we learned a fixed z for all trees.
Depth | Number of nodes (Mean / Min / Max) | Tree, no VAE: log p(x) | Tree VAE: log p(x), ~ log p(x) | Tree VAE (var. latent): log p(x), ~ log p(x) | Sequential: log p(x)
5 | 15 / 11 / 19 | -28.26 | -27.03, -26.85 | -27.02, -26.86 | -25.21
7 | 58 / 39 / 75 | -106.06 | -82.08, -80.19 | -95.32, -92.68 | -74.81
9 | 206 / 187 / 251 | -332.66 | -331.03, -330.68 | -331.12, -330.78 | -330.75
11 | 710 / 641 / 1279 | -1172.96 | -1169.85, -1169.44 | -- | -1404.18
Results. Table 1 shows statistics on the datasets and the experimental results we obtained from training various models. The tree variational autoencoder model achieves better performance on deeper trees. In particular, the sequential model fails to learn well on depth 11 trees. However, it appears that a tree-structured model with a fixed z performs similarly, although consistently worse than with the VAE."}, {"section_index": "9", "section_name": "4.3 FIRST-ORDER LOGIC PROOF CLAUSES", "section_text": "We next consider a dataset derived from Alemi et al. (2016): fragments of automatically-generated proofs for mathematical theorems stated in first-order logic. An automated theorem prover tries to prove a hypothesis given some premises, producing a series of steps until we conclude the hypothesis follows from the premises. Many theorem provers work by resolution; it negates the hypothesis and shows that a contradiction follows. However, figuring out the intermediate steps in the proof is a highly nontrivial search procedure. If we can use machine learning to generate clauses which are likely to appear as a proof step, we may be able to speed up automated theorem proving significantly.
Grammar. We generate first-order logic statements which are clauses, or disjunctions of literals. Each literal either contains one predicate invocation or asserts that two expressions are equal. Each predicate invocation contains a name, which also determines a fixed number of arguments; each argument is an expression. An expression is either a function invocation, a number, or a variable. A function invocation is structurally identical to a predicate invocation.
We consider the set of functions and predicates to be closed. Furthermore, given that each function and predicate has a fixed number of arguments, we made each of these its own type in the grammar. To avoid having a very large number of weights as a consequence, we applied the modifications described in Section 3.5.
Table 2: Statistics for first-order logic proof clauses, and log likelihoods of models trained on them. See Table 1 for more information about the column names.
Number of Functions | Number of Predicates | Number of nodes (Mean / Min / Max) | Tree, no VAE: log p(x) | Tree VAE: log p(x), ~ log p(x) | Sequential: log p(x)
6798 | 3140 | 15 / 1 / 2455 | -57.74 | -33.53, -30.52 | -29.22
Results. Table 2 describes our results on this dataset. We trained on 955,529 trees and again tested on 1,024 trees. The sequential model demonstrates slightly better log likelihood compared to the tree variational autoencoder model. However, on this dataset we observe a significant improvement in log likelihood by adding the variational autoencoder to the tree model, unlike on the arithmetic datasets.
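The tighter "~ log p(x)" estimates in Tables 1 and 2 follow Burda et al. (2015); a minimal sketch of the estimator (ours), given log densities for the 50 posterior samples:

```python
import numpy as np

def iwae_log_px(log_p_xz, log_pz, log_qz):
    """IWAE estimate log p(x) ~ log (1/S) sum_s p(x|z_s) p(z_s) / q(z_s|x),
    given arrays of shape (S,) for S samples z_s ~ q(z|x) (S = 50 here)."""
    log_w = log_p_xz + log_pz - log_qz              # importance log-weights
    m = np.max(log_w)
    return m + np.log(np.mean(np.exp(log_w - m)))   # log-sum-exp for stability
```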
In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp. 1278-1286, 2014.\nTim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training gans. arXiv preprint arXiv:1606.03498, 2016.\nRichard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. Semantic composi tionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conferenc on Empirical Methods in Natural Language Processing and Computational Natural Languag Learning, pp. 1201-1211. Association for Computational Linguistics, 2012.\nKai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. 2015\nLucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.\nRafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.\nAaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Ko ray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprint arXiv:1606.05328, 2016.\nXingxing Zhang, Liang Lu, and Mirella Lapata. Top-down tree long short-term memory networks. arXiv preprint arXiv:1511.00060, 2015.\nYonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine trans lation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.\n-log p(x) KL divergence Reconstruction loss 105 105 600 (1) (1) (1) 500 (2) 104 (2) (2) (3) (3) (3) 104 400 (4) 103 (4) (4) 300 102 103 200 101 100 102 100 0 0 30000 60000 0 30000 60000 0 30000 60000\nFigure 2: Metrics of the same model trained with different KL-related hyperparameters. The x axis is the step count. (1): We annealed the KL cost weight from step 5000 to 25000. (2): We set the KL cost minimum () to 150. (3): We annealed the KL cost weight from step 20000 to 40000. (4): We annealed the KL cost weight from 5000 to 25000 and also set the minimum to 150.."}, {"section_index": "11", "section_name": "4 MODEL HYPERPARAMETERS", "section_text": "We used the Adam optimizer with a learning rate of 0.01, multiplied by O.98 every 10000 steps. We clipped gradients to have L2 norm 3. For the synthetic arithmetic data, we used a batch size of 64; for the first-order logic proof clauses, we used a batch size of 256. For the tree-structured variational autoencoder, we set k = 256 and used the ELU nonlinearity wherever another one was not explicitly specified. For the sequential models, we used two stacked LSTMs each with hidden state size 256. no dropout. We always unrolled the network to the full length of the sequence during training, and did not perform any bucketing of sequences by length."}, {"section_index": "12", "section_name": "KL DIVERGENCE DYNAMICS DURING TRAINING", "section_text": "Optimizing the variational autoencoder objective turned out to be a significant optimization chal lenge, as pointed out by prior work (Bowman et al. 2015, Sonderby et al. 2016}Kingma et al. 2016). Specifically, it is easy for the KL divergence term DkL(q(z | x) || p(z)) to collapse to zero, which means that qo(z x) is equal to the prior and does not convey any information about x. 
This leads to uninteresting latent representations and reduces the generative model to one that does not use a latent representation at all.\nAs explained by|Kingma et al.(2016), this phenomenon occurs as at the beginning of training it is much easier for the optimization process to move qo(z x) closer to the prior p(z) than to improve p(x z), especially when qo(z x) has not yet learned how to convey any useful information. To combat this, we use a combination of two techniques described in the previous work:.\n1To anneal from a to b, we used o (step - a+b) /10 to weight the KL cost as a function of the number of optimization steps taken.\nAnneal the weight on the KL cost term slowly from 0 to 1. Similar to|Bowman et al.(2015] our schedule was a shifted and horizontally-scaled sigmoid function. Set a floor on the KL cost, i.e. use -max(DkL(qo(z x)||p(z)),) instead of. DkL(qs(z x)[p(z)) in the objective (Kingma et al.]2016). This change means thai. the model receives no penalty for producing a KL divergence below X, and as the other part. of the objective (the reconstruction term) benefits from a higher KL divergence, it naturally. learns a more informative qo(z x) at least in KL divergence..\nWe found that at least one of these techniques were required to avoid collapse of the KL divergence to 0. However, as shown in Figure 2] we found that different combinations of these techniques could led to different overall results, suggesting that finding the desired equilibrium necessitates a hyperparameter search."}] |
"}] BJFG8Yqxl [{"section_index": "0", "section_name": "GROUP SPARSE CNNS FOR QUESTION SENTENCE CLASSIFICATION WITH ANSWER SETS", "section_text": "Mingbo Ma & Liang Huang
Department of EECS, Oregon State University, Corvallis, OR 97331, USA
Bing Xiang & Bowen Zhou"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Classifying question sentences into their corresponding categories is an important task with wide applications, for example in many websites' FAQ sections. However, traditional question classification techniques do not fully utilize the well-prepared answer data, which has great potential for improving question representation and could lead to better classification performance. In order to encode answer information into question representation, we first introduce novel group sparse autoencoders which utilize the group information in the answer set to refine question representation. We then propose a new group sparse convolutional neural network which naturally learns question representations with respect to their corresponding answers by implanting the group sparse autoencoders into the traditional convolutional neural network. The proposed model shows significant improvements over strong baselines on four datasets."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Question classification has applications in question answering (QA), dialog systems, etc., and has been increasingly popular in recent years. Most existing approaches to this problem simply use existing sentence modeling frameworks and treat questions as general sentences, without any special treatment. For example, several recent efforts employ Convolutional Neural Networks (CNNs) to achieve remarkably strong performance in the TREC question classification task as well as other sentence classification tasks such as sentiment analysis (Kim, 2014; Kalchbrenner et al., 2014; Ma et al., 2015).
We argue, however, that the general sentence modeling frameworks neglect several unique properties in question classification not found in other sentence classification tasks (such as sentiment classification or sarcasm detection), which we detail below:
- The categories for most sentence classification tasks are flat and coarse (notable exceptions such as the Reuters Corpus RCV1 (Lewis et al., 2004) notwithstanding), and in many cases, even binary (i.e. sarcasm detection). However, question sentences commonly belong to multiple categories, and these categories often have a hierarchical (tree or DAG) structure, such as those from the New York State DMV FAQ section in Fig. 1.
- Question sentences from different categories often share similar information or language patterns. This phenomenon becomes more obvious when categories are hierarchical. Fig. 2 shows one example of questions sharing similar information from different categories. These cross-category shared patterns are not only shown in questions but can also be found in the answers corresponding to these questions.
- Another unique characteristic of question classification is the well-prepared answer set with detailed descriptions or instructions for each corresponding question category. These answer sets generally cover a broader range of vocabulary (than the questions themselves) and carry more distinctive semantic meanings for each class. We believe there is great potential in exploiting this extra information from the answer sets.
1: Driver License/Permit/Non-Driver ID
  a: Apply for original. (49 questions)
  b: Renew or replace. (24 questions)
2: Vehicle Registrations and Insurance
  a: Buy, sell, or transfer a vehicle. (22 questions)
  b: Registration and title requirements. (42 questions)
3: Driving Record / Tickets / Points
...
Figure 1: Examples from the NYDMV FAQ section. There are 8 top-level categories, 47 subcategories, and 537 questions (388 unique questions; many questions fall into multiple categories).
Category: Finance
Q: How to get a personal loan from the bank?
Category: Education
Q: What are the steps for applying for a student loan?
Figure 2: Examples of questions from two different categories. These questions ask about similar problems even though they are in different classes. Their answers also contain similar information.
To exploit the hierarchical and overlapping structures in question categories and the extra information from answer sets, we consider dictionary learning (Aharon et al., 2005; Roth & Black, 2005; Lee et al., 2007; Candes & Wakin, 2008; Kreutz-Delgado et al., 2003; Rubinstein et al., 2010), which is one common approach for representing samples from vast, correlated groups with external information. This learning procedure first builds a dictionary with a series of grouped bases. These bases can be initialized randomly or from external data (from the answer set in our case) and optimized during training through Sparse Group Lasso (SGL) (Simon et al., 2013). Many promising improvements have been achieved recently by such grouped-dictionary learning-based methods (Zhao et al., 2016; Rao et al., 2016). We also showcase some preliminary experiments in Section 6 for question classification with SGL; its performance is indeed extraordinary compared with baselines, but still loses to the CNN-based method. Considering the unique advantages of the SGL-based and CNN-based models, we believe that the performance of question classification would get another boost if we could put the SGL-based and CNN-based models within the same end-to-end framework. This requires us to design a new neural-based model which behaves similarly to SGL.
Based on the above observations, we first propose novel Group Sparse Autoencoders (GSA). The objectives of GSA and SGL are very similar. The encoding matrix of GSA (like the dictionary in SGL) is grouped into different categories. The bases in different groups can be initialized randomly or by the sentences in the corresponding answer categories. Each question sentence will be reconstructed by some bases within a few groups. To the best of our knowledge, GSA is the first fully neural network-based model with group sparse constraints. GSA can have either linear or nonlinear encoding and decoding, while SGL is restricted to be linear. In order to incorporate the advantages of both GSA and CNNs, we then propose new Group Sparse Convolutional Neural Networks (GSCNNs) by implanting the GSA into CNNs between the convolutional layer and the classification layer. GSCNNs are a jointly trained end-to-end neural-based framework for getting question representations with group sparse constraints from both answer and question sets. Experiments show significant improvements over strong baselines on four datasets.
We first review the basic autoencoders and sparse autoencoders to establish the mathematical notations. Then we propose our new autoencoder with group sparse constraints in a later section.
As introduced in (Bengio et al., 2007), an autoencoder is an unsupervised neural network which learns the hidden representations of input samples.
An autoencoder takes an input instance z ∈ R^d, and then maps it into a hidden space in the form of h ∈ R^s through a deterministic mapping function h = e_θ(z) = σ(Wz + b), where θ = {W, b}. W is a d × s projection matrix and b is the bias term. The projection function can be a linear or non-linear function such as the sigmoid. This projection process often can be recognized as an encoding process. The encoded hidden representation is then mapped back to the original input space to reconstruct a vector ẑ ∈ R^d with the function ẑ = e_θ′(h) = σ(W′h + c) with θ′ = {W′, c}. The reverse projection matrix W′ may optionally be constrained by W′ = W^T. This reverse operation can be recognized as a decoding process which tries to reconstruct a new vector ẑ such that the difference between ẑ and z is as small as possible, by minimizing the average reconstruction error:
(θ, θ′) = argmin_{W,b,c} J(W, b, c) = argmin_{W,b,c} (1/n) Σ_{i=1}^n L(z^(i), ẑ^(i))    (1)
where L is a loss function such as the minimum square error L(z, ẑ) = ||z − ẑ||². Depending on the application, this loss function can also be defined as the reconstruction cross-entropy between z and ẑ:
L_C(z, ẑ) = −Σ_{k=1}^d (z_k log ẑ_k + (1 − z_k) log(1 − ẑ_k))
When the dimensionality of the hidden space s is smaller than the dimensionality of the input space d, the network is forced to learn a compressed representation of the input. If there is structure or feature correlation in the data, the linear autoencoder often ends up learning a low-dimensional representation like PCA. Most of the time, autoencoders learn a compressed representation when the number of hidden units s is small. However, when the number of hidden units becomes larger than the dimensionality of the input space, there is still interesting structure that can be discovered by imposing other constraints on the network. The sparse constraints discussed below are one such possibility."}, {"section_index": "3", "section_name": "2.2 SPARSE AUTOENCODERS", "section_text": "Sparse autoencoders (Ng, 2011; Makhzani & Frey, 2014) show interesting results of getting visualizations of the hidden layers. Recall that h_j^i represents the activation of the jth hidden unit for a given specific input z_i. Then the average activation of hidden unit j (averaged over the training batch) can be defined as:
ρ̂_j = (1/m) Σ_{i=1}^m h_j^i    (2)
where m is the number of samples in the training batch. The goal of sparse autoencoders is to enforce the constraint
ρ̂_j = ρ    (3)
where ρ is the sparsity parameter. In order to achieve the above objective, there will be an extra penalty term in our optimization function which tries to reconstruct the original input with as few hidden layer activations as possible. The most commonly used penalty term (Ng, 2011) is as follows:
Σ_{j=1}^s KL(ρ || ρ̂_j) = Σ_{j=1}^s [ρ log (ρ / ρ̂_j) + (1 − ρ) log ((1 − ρ) / (1 − ρ̂_j))]    (4)
where s is the number of units in the hidden layer, and j is the index of the hidden unit. This penalty term is based on the KL divergence, which measures the difference between two distributions.
Then our new objective of the sparse autoencoders is defined as follows:
J_sparse(W, b, c) = J(W, b, c) + α Σ_{j=1}^s KL(ρ || ρ̂_j)    (5)
where J(W, b, c) is defined in Eq. 1, and α controls the weight of the sparsity penalty term. Note that the term ρ̂_j is implicitly controlled by W, b and c. This is one of the differences between sparse autoencoders and sparse coding, which will be discussed in detail in Section 6.
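A compact numpy sketch of Eqs. (1)-(5) with tied weights (our own illustration; initialization and the training loop are omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sparse_ae_loss(Z, W, b, c, rho=0.05, alpha=1.0):
    """Z: (m, d) batch. Returns J + alpha * sum_j KL(rho || rho_hat_j)
    for a sigmoid autoencoder with tied weights (W' = W^T)."""
    H = sigmoid(Z @ W + b)                      # (m, s) hidden activations
    Z_hat = sigmoid(H @ W.T + c)                # reconstruction
    J = np.mean(np.sum((Z - Z_hat) ** 2, axis=1))        # Eq. (1)
    rho_hat = np.clip(H.mean(axis=0), 1e-7, 1 - 1e-7)    # Eq. (2)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))  # Eq. (4)
    return J + alpha * kl                       # Eq. (5)
```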
Inspired by the motivations behind group sparse lasso (Yuan & Lin, 2006) and sparse group lasso (Simon et al., 2013), we propose the novel Group Sparse Autoencoder (GSA) in this paper.

Different from sparse autoencoders, in GSA the weight matrix is partitioned into different groups. For a given input, GSA reconstructs the input signal with activations from only a few groups. Analogous to the average activation defined in Eq. 2 for sparse autoencoders, in GSA we define the average activation of each group in the hidden layer as:

η̂_p = (1/(mg)) Σ_{i=1}^{m} Σ_{l=1}^{g} || h_{p,l}^i ||_2    (6)

where g is the number of hidden units in each group and m is the number of samples in the training batch. η̂_p first sums all the activations within the p-th group, then averages the p-th group's response across the hidden activations of the different samples.

As in Eq. 4, we use a KL divergence to measure the difference between the estimated group activations and the target group sparsity η:

Σ_{p=1}^{G} KL(η || η̂_p) = Σ_{p=1}^{G} [ η log(η/η̂_p) + (1 − η) log((1 − η)/(1 − η̂_p)) ]    (7)

where G is the number of groups. When only inter-group constraints are needed, the loss function of the autoencoder can be defined as:

J_gs(W, b, c) = J(W, b, c) + Σ_{p=1}^{G} KL(η || η̂_p)    (8)

In some cases, inter- and intra-group sparsity are preferred at the same time, and the objective becomes:

J_gs(W, b, c) = J(W, b, c) + α Σ_{j=1}^{s} KL(ρ || ρ̂_j) + β Σ_{p=1}^{G} KL(η || η̂_p)    (9)

The inter-group sparse autoencoder defined in Eq. 8 has similar functionality to group sparse lasso (Yuan & Lin, 2006), while the inter- and intra-group sparse autoencoder defined in Eq. 9 behaves similarly to sparse group lasso (Simon et al., 2013). Unlike the sparse coding approach, the encoding and decoding processes here can be nonlinear, whereas sparse coding is always linear.

As in sparse coding, the projection matrix in GSA works like a dictionary containing all the bases needed to reconstruct the input signal from the hidden-layer activations. Different initialization methods for the projection matrix are described in Section 5.
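The group-level terms above can be sketched the same way. The snippet below is our toy NumPy illustration of Eqs. 6-9: it reshapes the hidden activations into G groups of g units, computes the per-group average activation, and evaluates both KL penalties. The kl helper and all sizes are assumptions, not the paper's code.

```python
import numpy as np

m, G, g = 32, 10, 50            # batch size, number of groups, units per group
rng = np.random.default_rng(0)
H = rng.random((m, G * g))      # hidden activations in (0, 1), one row per sample

def kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), elementwise in q."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# Eq. 6: average activation of each group across the batch.
H_grouped = np.abs(H).reshape(m, G, g)
eta_hat = H_grouped.sum(axis=(0, 2)) / (m * g)     # shape (G,)
rho_hat = H.mean(axis=0)                           # Eq. 2: per-unit averages

rho, eta = 0.3, 0.2             # intra- and inter-group sparsity targets
alpha, beta = 1.0, 1.0
intra = alpha * kl(rho, rho_hat).sum()             # per-unit term of Eq. 9
inter = beta * kl(eta, eta_hat).sum()              # group term of Eqs. 7-8
# The full objective of Eq. 9 is J(W, b, c) + intra + inter.
print(f"intra-group penalty = {intra:.3f}, inter-group penalty = {inter:.3f}")
```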
Figure 3: The input image with the handwritten digit 0 is shown in (a). Panel (b) visualizes the projection matrix W; different rows represent different groups of W in Eq. 9, and for each group we only show the first 15 (out of 50) bases. The red numbers on the left of (b) index the different groups (10 groups in total). Panel (c) visualizes the projection matrix of a basic autoencoder.

3.1 VISUALIZATION FOR GROUP SPARSE AUTOENCODERS

In our experiments, we use 10,000 samples for training. We set the size of the hidden layer to 500, with 10 groups for GSA, the intra-group sparsity ρ to 0.3, the inter-group sparsity η to 0.2, and α = β = 1. We also train a basic autoencoder on the same 10,000 examples with random noise added to the input signal (a denoising autoencoder (Vincent et al., 2008)) for better hidden-information extraction, adding the same 30% random noise to both models. Note that the number of groups in this experiment does not have to be 10; since this is an image dataset of digits, fewer groups could be used to train GSA.

In Fig. 3(b) we find similar patterns within each group. For example, the 8th group in Fig. 3(b) contains different forms of the digit 0, and the 9th group includes different forms of the digit 7. In contrast, it is difficult to discern any meaningful pattern in the projection matrix of the basic autoencoder in Fig. 3(c).

Fig. 4 shows the hidden activations for the input image in Fig. 3(a). From the results, most of the hidden-layer activations fall in groups 1, 2, 6 and 8, with the 8th group showing the most significant activations. Referring these activations back to the projection-matrix visualization in Fig. 3(b), this is reasonable, since the 8th row has the patterns most similar to the digit 0.

Figure 4: The hidden activations h for the input image in Fig. 3(a). The red numbers correspond to the group indices in Fig. 3(b). The activations come from 10 different groups, each of size 50.

GSA can be applied directly to small image data (e.g., the MNIST dataset) for pre-training. However, for tasks that prefer dense, semantic representations (e.g., sentence classification), we still need CNNs to learn the sentence representation automatically. In this scenario, in order to incorporate the advantages of both GSA and CNNs, we propose Group Sparse Convolutional Neural Networks, described in the following section.

Figure 5: Framework used in our model. We add an extra encoding layer to the CNN. The sentence representation after the convolutional layer is denoted z, and W is the projection matrix (functioning as a dictionary) in Eq. 9. The hidden group-sparse representation of the question sentence is denoted h. Different colors in the projection matrix represent different groups; we show W^T instead of W for better visualization. Darker cells in h indicate larger values, and white indicates zero.

4 GROUP SPARSE CONVOLUTIONAL NEURAL NETS

Convolutional neural networks (CNNs) were first proposed by LeCun et al. (1995) in computer vision: for a given image, CNNs apply convolution kernels over a series of continuous regions of the image. The concept was first adapted to NLP by Collobert et al. (2011). Recently, many CNN-based techniques have achieved great success in sentence modeling and classification (Kim, 2014; Kalchbrenner et al., 2014; Ma et al., 2015). For simplicity, we use the sequential CNN (Kim, 2014) as our baseline.

Following sequential CNNs, a one-dimensional convolution applies the convolution kernel in sequential order as in Eq. 10, where x_i ∈ R^e is the e-dimensional word representation of the i-th word in the sentence and ⊕ is the concatenation operator; x_{i,j} thus refers to the concatenated word vectors from the i-th to the (i+j)-th word:

x_{i,j} = x_i ⊕ x_{i+1} ⊕ ... ⊕ x_{i+j}    (10)

A convolution applies a filter w ∈ R^{n·e} to a window of n words x_{i,i+n}, with bias term b', to produce a new feature:

a_i = σ(w · x_{i,i+n} + b')    (11)

where σ is a non-linear activation function such as the rectified linear unit (ReLU) or the sigmoid. The filter is applied to each window in the sentence, generating the feature map a ∈ R^L:

a = [a_1, a_2, ..., a_L]    (12)

The convolution in Eq. 11 can be regarded as feature detection: more similar patterns return higher activations. In sequential CNNs, max-over-time pooling (Collobert et al., 2011; Kim, 2014) operates over the feature map to extract the maximum activation â = max{a}, representing the entire feature map. The idea is to detect the strongest activation over time; this pooling strategy also naturally handles variation in sentence length.

In order to capture different aspects of patterns, CNNs usually randomly initialize a set of filters with different sizes and values. Each filter generates one feature as described above. Taking the features generated by all N different filters into account, we use z = [â_1, ..., â_N] as the final sentence representation.
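As a concrete illustration of Eqs. 10-12, the following NumPy sketch slides one filter over all n-word windows of a toy sentence and applies max-over-time pooling to build z. All names (such as conv_feature_map) and sizes are ours; a real system would learn the filters inside a neural-network framework.

```python
import numpy as np

rng = np.random.default_rng(0)
L_words, e = 12, 8                    # sentence length, word-vector dimension
X = rng.normal(size=(L_words, e))     # one word vector per row

def conv_feature_map(X, w, b, n):
    """a_i = sigma(w . x_{i,i+n-1} + b) for every window of n words (Eq. 11)."""
    L = X.shape[0] - n + 1
    a = np.empty(L)
    for i in range(L):
        window = X[i:i + n].ravel()   # x_i (+) ... (+) x_{i+n-1}, Eq. 10
        a[i] = np.tanh(w @ window + b)  # sigma = tanh here
    return a                          # feature map, Eq. 12

n_filters, n = 4, 3
z = np.empty(n_filters)
for k in range(n_filters):
    w = rng.normal(size=n * e)        # filter w in R^{n*e}
    a = conv_feature_map(X, w, 0.0, n)
    z[k] = a.max()                    # max-over-time pooling

print("sentence representation z =", z)   # z = [a_1, ..., a_N]
```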
In conventional CNNs, z is fed directly into a classifier once the sentence representation is obtained, e.g., a fully connected neural network in (Kim, 2014). There is no easy way for CNNs to explore possible hidden representations with interesting underlying structure.

In order to obtain such hidden representations for each sentence, we propose Group Sparse Convolutional Neural Networks (GSCNNs), which place one extra layer between the convolutional layer and the classification layer. This extra layer mimics the functionality of the GSA introduced in Section 2.

Our proposed framework is shown in Fig. 5. The convolutional layer follows the traditional convolution process described previously and yields the feature map z, which is treated as the feature representation of each sentence. Instead of feeding z directly into a fully connected neural network for classification, we enforce a group sparse constraint on z, analogous to the constraint on h in Eq. 9, and use the hidden representation h of Eq. 9 as the new sentence representation. The last step is feeding h into a fully connected neural network for classification; the parameters W, b, and c in Eq. 9 are also fine-tuned during this step.

In order to improve the robustness of the hidden representation and prevent it from simply learning the identity, we follow the idea of denoising autoencoders (Vincent et al., 2008) and add random noise (10% in our experiments) to z. The training process of our model is similar to that of stacked autoencoders (Bengio et al., 2007).

In order to prevent co-adaptation of the hidden units, we employ random dropout on the penultimate layer (Hinton et al., 2014), with a dropout rate of 0.5 and a learning rate of 0.95 by default. In our experiments, training is done through stochastic gradient descent over shuffled mini-batches with the Adadelta update rule (Zeiler, 2012). All other settings of the CNNs are the same as in (Kim, 2014).
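Putting the pieces together, the following schematic (ours, not the authors' code) traces one forward pass of the GSCNN pipeline in Fig. 5: the CNN feature map z is corrupted with 10% input noise, encoded by the group-sparse projection of Eq. 9, and the hidden code h is passed to the classifier. All dimensions and the uniform masking scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

N, s, C = 100, 500, 8                # CNN features, group-sparse units, classes
W = rng.normal(0, 0.05, (s, N))      # group-sparse projection ("dictionary")
b, c = np.zeros(s), np.zeros(N)
V = rng.normal(0, 0.05, (C, s))      # classification layer

z = rng.random(N)                    # CNN sentence representation
mask = rng.random(N) > 0.10          # drop 10% of entries as input noise
h = sigmoid(W @ (z * mask) + b)      # group-sparse code (trained with Eq. 9)
z_hat = sigmoid(W.T @ h + c)         # reconstruction used by the GSA loss
logits = V @ h                       # fed to a softmax or sigmoid output layer
print(h.shape, z_hat.shape, logits.shape)
```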
5 EXPERIMENTS

Since there has been little effort to use answer sets in question classification, we did not find well-suited datasets that are publicly available. We therefore collected two datasets ourselves and use two other well-known datasets in our experiments. The statistics of these datasets are summarized in Table 1. The datasets are as follows:

- TREC: The TREC dataset² is a factoid question classification dataset; the task is to classify each question into one of 6 question types (Li & Roth, 2002). We include this factoid-question dataset to show the effectiveness of the proposed method on a frequently used benchmark, even though no categorized answer set is available.
- Insurance: A private dataset we collected from a car insurance company's website. Each question is classified into one of 319 possible classes with corresponding answer data. All questions belonging to the same category share the same answers, and all answers were generated manually. Most questions have multiple assigned labels.
- DMV: We collected this dataset from the New York State DMV's FAQ website. We will make this data publicly available in the future.
- Yahoo Ans: The Yahoo! Answers dataset (Fleming et al., 2012; Shah & Pomerantz, 2010) is publicly available.³ It contains more than 4 million questions with answers. For simplicity, we randomly sample 8,871 questions from the complete dataset. There are 27 top-level categories across different domains; to make our task more realistic and challenging, we evaluate the proposed model on the subcategories, of which there are 678.

³http://webscope.sandbox.yahoo.com/catalog.php?datatype=l

Datasets     Ct   Cs    Ndata   Ntest   Nans    Multi-label?
TREC         6    50    5952    500     -       No
Insurance    -    319   1580    303     2176    Yes
DMV          8    47    388     50      2859    Yes
Yahoo Ans.   27   678   8871    3027    10365   No

Table 1: Summary of dataset statistics. Ct is the number of top-level categories and Cs the number of subcategories (note that we only do top-level classification on TREC). Ndata is the dataset size, Ntest the test-set size, and Nans the answer-set size.

We only compare our model's performance with CNNs, for two reasons. First, we regard our group sparse constraint as a modification of general CNNs for grouped feature selection; this idea is orthogonal to other CNN-based models and can easily be applied to them. Second, as discussed in Section 1, we did not find any other model suitable for comparison on question classification with answer sets.

The datasets used in our experiments require label information for both questions and answers. Moreover, as in websites' FAQ sections, all questions belonging to the same category share the same answer set. Among the four datasets above, only Insurance and DMV fit our model perfectly; in the Yahoo dataset, questions falling into the same category have different answers.

The different ways of initializing the projection matrix in Eq. 9 can be summarized as follows (a small clustering sketch is given after this list):

- Random initialization: When no answer corpus is available, we first randomly initialize N vectors (usually N ≫ s) to stand in for answer-set representations. We then cluster these N vectors into G categories with g centroids per category. These centroids from different categories form the initial bases of the projection matrix W, which is further optimized during training.
- Initialization from questions: When the answer set is unavailable, we can instead use the question sentences, pre-training them with CNNs to obtain sentence representations. We then select the top G categories with the largest numbers of question sentences and obtain g centroids from each category by k-means, concatenating these G × g vectors group after group to form the projection matrix.
- Initialization from answers: This is the ideal case. We follow the same procedure as above, except that we pre-train the CNNs on answer sentences to obtain answer-sentence representations.
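As a rough illustration of the clustering-based initializations above, the sketch below clusters placeholder answer-sentence vectors within each of G categories into g centroids and stacks the centroids group by group to form the initial projection matrix. The bare-bones kmeans helper, the random stand-in vectors, and all sizes are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Minimal k-means: returns k centroids of the rows of X."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers

G, g, dim = 20, 10, 100           # categories, centroids per category, feature dim
# Placeholder "answer sentence vectors" for each category (50 per group).
groups = [rng.normal(loc=p, size=(50, dim)) for p in range(G)]

W_init = np.vstack([kmeans(Xp, g) for Xp in groups])  # shape (G*g, dim)
print(W_init.shape)               # rows are the grouped bases of W
```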
For single-label classification tasks (TREC and Yahoo), we use a softmax output layer, which makes a single peaked choice across all labels. For multi-label classification (Insurance and DMV), we replace the softmax layer in the CNNs with a sigmoid layer, since the sigmoid predicts each category independently, while the softmax's exclusivity allows cross-influence between categories.

All experimental results are summarized in Table 2. TREC is a factoid question-type classification dataset; we include it to show our performance on a frequently used benchmark. The proposed method improves only marginally over the baseline there because the sentences in TREC are too short. For the Insurance and DMV datasets, the improvement is significant.

Model   TREC   Insurance   DMV   Yahoo-sub   Yahoo-top   Yahoo-unseen
CNNs    93.6   51.2        60    20.8        53.9        47
W_R     93.8   53.5        62    21.8        54.5        48
W_Q     94.2   53.8        64    22.1        54.1        48
W_A     -      55.4        66    22.2        55.8        53

Table 2: Experiments on the four datasets. The baseline is the sequential CNN. W_R means the projection matrix is randomly initialized, W_Q that it is initialized by clustering the question sentences, and W_A that it is initialized from the answer set. There are three settings for the Yahoo dataset: classification on subcategories, on top-level categories, and on unseen sub-labels.

In the experiments on the Yahoo dataset, the improvement is not as significant as on Insurance and DMV. One reason is that the questions in the Yahoo dataset are usually very short, sometimes only two or three words; when sentences become shorter, the group information becomes harder to encode. Another reason is that Yahoo questions are always single-labeled, and so cannot fully utilize the benefits of the group sparse properties. Yahoo-top shows the results of top-level category classification: we map the subcategories back to the top-level categories and report the results in Table 2.

Besides the conventional classification tasks, we also test our proposed model on unseen-label experiments, in which a few sub-category labels are not included in the training process. We still hope our model can classify such an unseen sub-category label into the correct parent category based on its sub-category estimate. In the test set of the Yahoo dataset, we randomly add 100 questions whose labels are unseen in the training set. The Yahoo-unseen results in Table 2 are obtained by mapping the subcategory classification results to the top-level categories and checking whether the true label's top-level category matches the predicted label's parent category. The improvements are remarkable due to the group information encoding.

6 DISCUSSION

The idea of re-forming a signal into a sparse representation was first introduced in the domain of compressed sensing (Cande & Wakin, 2008), and has achieved great success in signal compression, visualization, and classification tasks. The performance usually improves significantly when the dictionary is well trained, as shown in (Wang et al., 2010; Yang et al., 2009) for image classification tasks.
Model type        Method         Accuracy
k-NN based model  vanilla k-NN   31.2
k-NN based model  k-NN + SGL     32.2
SVM based model   vanilla SVM    33.7
SVM based model   SVM + SGL      44.5
CNNs based model  vanilla CNNs   51.2

Table 3: Experiments with two baseline models, k-NN and SVM, on the Insurance dataset.

In Table 3 we test the influence of Sparse Group Lasso (SGL) (Simon et al., 2013) with two baseline methods, k-nearest neighbors (k-NN) and SVM, on the Insurance dataset. We use TF-IDF as the feature representation for each question and answer sentence. We first select all answer sentences from the 20 largest categories and then find 10 centroids for each of these categories by k-means, yielding a dictionary of 200 centroids in 20 groups. We observe a large performance improvement when the original sentence representations are preprocessed with SGL before applying SVM. We further test CNNs on the same dataset: CNNs outperform SVM and k-NN even with SGL, because of the well-trained sentence representations learned by the CNN. For vanilla CNNs, however, it is not straightforward to embed SGL into the network and still get good sentence representations, since SGL would break the propagation of the training error in backpropagation.

GSA, by contrast, is a fully neural-network-based framework. Our proposed GSA has similar functionality to SGL (Yuan & Lin, 2006; Simon et al., 2013), as shown in Fig. 3 and Fig. 4, but takes a different approach. Compared with sparse coding approaches, which require intensive optimization of both the dictionary and the codes, GSA's optimization is based on simple backpropagation, and GSA can easily be placed into any neural network for joint training. Another advantage of GSA over sparse coding is that the projection function in GSA can be linear or non-linear, while sparse coding always learns linear codings.

CONCLUSIONS AND FUTURE WORK

In this paper, we first presented a novel GSA framework that functions as a dictionary learning and sparse coding model with inter- and intra-group sparse constraints. We demonstrated GSA's learning ability by visualizing its projection matrix and activations. We further proposed Group Sparse Convolutional Neural Networks by embedding GSA into CNNs, and showed that CNNs can benefit from GSA by learning more meaningful representations from the dictionary.

REFERENCES

Michal Aharon, Michael Elad, and Alfred Bruckstein. K-SVD: Design of dictionaries for sparse representation. In Proceedings of SPARS'05, pp. 9-12, 2005.

Emmanuel J. Cande and Michael B. Wakin. An introduction to compressive sampling. IEEE Signal Processing Magazine, 25, 2008.

Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems 19, pp. 153-160. MIT Press, 2007.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. Journal of Machine Learning Research, 15, 2014.

Kenneth Kreutz-Delgado, Joseph F. Murray, Bhaskar D. Rao, Kjersti Engan, Te-Won Lee, and Terrence Sejnowski. Dictionary learning algorithms for sparse representation. 2003.

Y. LeCun, L. Jackel, L. Bottou, A. Brunot, C. Cortes, J. Denker, H. Drucker, I. Guyon, U. Müller, E. Säckinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit recognition. In International Conference on Artificial Neural Networks, pp.
53-60, 1995.

Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In NIPS, pp. 801-808, 2007.

David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5(Apr):361-397, 2004.

Mingbo Ma, Liang Huang, Bing Xiang, and Bowen Zhou. Dependency-based convolutional neural networks for sentence embedding. In Proceedings of ACL 2015, 2015.

Alireza Makhzani and Brendan Frey. k-sparse autoencoders. In International Conference on Learning Representations, 2014.

Nikhil Rao, Robert Nowak, Christopher Cox, and Timothy Rogers. Classification with the sparse group lasso. IEEE Transactions on Signal Processing, 64(2):448-463, 2016.

Stefan Roth and Michael J. Black. Fields of experts: A framework for learning image priors. In CVPR, pp. 860-867, 2005.

R. Rubinstein, A. M. Bruckstein, and M. Elad. Dictionaries for sparse representation modeling. 2010.

Noah Simon, Jerome Friedman, Trevor Hastie, and Rob Tibshirani. A sparse-group lasso. 2013.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. pp. 1096-1103, 2008.

Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. volume 68, pp. 49-67, 2006.

Yize Zhao, Matthias Chung, Brent A. Johnson, Carlos S. Moreno, and Qi Long. Hierarchical feature selection incorporating known and novel biological information: Identifying genomic features related to prostate cancer recurrence. Journal of the American Statistical Association, (just-accepted), 2016.
DYNAMIC COATTENTION NETWORKS FOR QUESTION ANSWERING

Caiming Xiong*, Victor Zhong*, Richard Socher
{cxiong, vzhong, rsocher}@salesforce.com

ABSTRACT

Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.

1 INTRODUCTION

Question answering (QA) is a crucial task in natural language processing that requires both natural language understanding and world knowledge. Previous QA datasets tend to be high in quality due to human annotation, but small in size (Berant et al., 2014; Richardson et al., 2013). Hence, they did not allow for training data-intensive, expressive models such as deep neural networks.

To address this problem, researchers have developed large-scale datasets through semi-automated techniques (Hermann et al., 2015; Hill et al., 2016). Compared to their smaller, hand-annotated counterparts, these QA datasets allow the training of more expressive models. However, it has been shown that they differ from more natural, human-annotated datasets in the types of reasoning required to answer the questions (Chen et al., 2016).

Recently, Rajpurkar et al. (2016) released the Stanford Question Answering dataset (SQuAD), which is orders of magnitude larger than all previous hand-annotated datasets and has a variety of qualities that culminate in a natural QA task. SQuAD has the desirable quality that answers are spans in a reference document. This constrains answers to the space of all possible spans. However, Rajpurkar et al. (2016) show that the dataset retains a diverse set of answers and requires different forms of logical reasoning, including multi-sentence reasoning.

We introduce the Dynamic Coattention Network (DCN), illustrated in Fig. 1, an end-to-end neural network for question answering. The model consists of a coattentive encoder that captures the interactions between the question and the document, as well as a dynamic pointing decoder that alternates between estimating the start and end of the answer span. Our single model obtains an F1 of 75.9% compared to the best published result of 71.0% (Yu et al., 2016). In addition, our ensemble model obtains an F1 of 80.4% compared to the second best result of 78.1% on the official SQuAD leaderboard.¹

¹As of Nov. 3 2016. See https://rajpurkar.github.io/SQuAD-explorer/ for latest results.

Figure 1 illustrates an overview of the DCN. We first describe the encoders for the document and the question, followed by the coattention mechanism and the dynamic decoder, which produces the answer span.
Figure 1: Overview of the Dynamic Coattention Network. (The diagram, not reproduced here, shows the document and question encoders feeding a coattention encoder, followed by a dynamic pointer decoder that outputs start index 49 and end index 51, the span "steam turbine plants", for the question "What plants create most electric power?".)

2.1 DOCUMENT AND QUESTION ENCODER

The question embeddings are computed with the same LSTM to share representation power: q_t = LSTM_enc(q_{t−1}, x_t^Q). We define an intermediate question representation Q' = [q_1 . . . q_n q_∅] ∈ R^{l×(n+1)}. To allow for variation between the question encoding space and the document encoding space, we introduce a non-linear projection layer on top of the question encoding. The final representation for the question becomes:

Q = tanh(W^(Q) Q' + b^(Q)) ∈ R^{l×(n+1)}

2.2 COATTENTION ENCODER

We propose a coattention mechanism that attends to the question and document simultaneously, similar to (Lu et al., 2016), and finally fuses both attention contexts. Figure 2 provides an illustration of the coattention encoder.

We first compute the affinity matrix, which contains affinity scores corresponding to all pairs of document words and question words: L = D^T Q ∈ R^{(m+1)×(n+1)}. The affinity matrix is normalized row-wise to produce the attention weights A^Q across the document for each word in the question, and column-wise to produce the attention weights A^D across the question for each word in the document:

A^Q = softmax(L) ∈ R^{(m+1)×(n+1)}  and  A^D = softmax(L^T) ∈ R^{(n+1)×(m+1)}    (1)

Next, we compute the summaries, or attention contexts, of the document in light of each word of the question:

C^Q = D A^Q ∈ R^{l×(n+1)}    (2)

Figure 2: Coattention encoder. The affinity matrix L is not shown here; we instead directly show the normalized attention weights A^D and A^Q.

We similarly compute the summaries Q A^D of the question in light of each word of the document. Similar to Cui et al. (2016), we also compute the summaries C^Q A^D of the previous attention contexts in light of each word of the document. These two operations can be done in parallel, as shown in Eq. 3. One possible interpretation of the operation C^Q A^D is the mapping of the question encoding into the space of the document encodings.

C^D = [Q; C^Q] A^D ∈ R^{2l×(m+1)}    (3)

We define C^D, a co-dependent representation of the question and document, as the coattention context. We use the notation [a; b] for concatenating the vectors a and b horizontally.

The last step is the fusion of temporal information into the coattention context via a bidirectional LSTM:

u_t = Bi-LSTM(u_{t−1}, u_{t+1}, [d_t; c_t^D]) ∈ R^{2l}    (4)
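For concreteness, the following NumPy sketch evaluates Eqs. 1-3 on random toy encodings (the fusion Bi-LSTM of Eq. 4 is omitted). It is our illustration rather than the authors' code; we normalize each softmax over the attended-over dimension so every summary is a convex combination, and all dimensions are toy values with the sentinels folded into m+1 and n+1.

```python
import numpy as np

rng = np.random.default_rng(0)
l, m1, n1 = 8, 6, 4              # hidden size, m+1 document cols, n+1 question cols
D = rng.normal(size=(l, m1))     # document encodings
Q = rng.normal(size=(l, n1))     # question encodings

def softmax(X, axis):
    X = X - X.max(axis=axis, keepdims=True)
    e = np.exp(X)
    return e / e.sum(axis=axis, keepdims=True)

L = D.T @ Q                      # affinity matrix, (m+1) x (n+1)
A_Q = softmax(L, axis=0)         # attention over document words, per question word
A_D = softmax(L.T, axis=0)       # attention over question words, per document word

C_Q = D @ A_Q                    # document summaries, l x (n+1)     (Eq. 2)
C_D = np.vstack([Q, C_Q]) @ A_D  # coattention context, 2l x (m+1)   (Eq. 3)
# Each column [d_t; c_t^D] would then be fed to the fusion Bi-LSTM (Eq. 4).
print(C_Q.shape, C_D.shape)
```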
2.3 DYNAMIC POINTING DECODER

Due to the nature of SQuAD, an intuitive method for producing the answer span is to predict the start and end points of the span (Wang & Jiang, 2016b). However, given a question-document pair, there may exist several intuitive answer spans within the document, each corresponding to a local maximum. We propose an iterative technique that selects an answer span by alternating between predicting the start point and predicting the end point. This iterative procedure allows the model to recover from initial local maxima corresponding to incorrect answer spans.

Figure 3 provides an illustration of the dynamic decoder, which is similar to a state machine whose state is maintained by an LSTM-based sequential model. During each iteration, the decoder updates its state taking into account the coattention encoding corresponding to the current estimates of the start and end positions, and produces, via a multilayer neural network, new estimates of the start and end positions.

Let h_i, s_i, and e_i denote the hidden state of the LSTM and the estimates of the start and end positions during iteration i. The LSTM state update is then described by Eq. 5:

h_i = LSTM_dec(h_{i−1}, [u_{s_{i−1}}; u_{e_{i−1}}])    (5)

where u_{s_{i−1}} and u_{e_{i−1}} are the representations corresponding to the previous start and end estimates in the coattention encoding U.

Figure 3: Dynamic Decoder. Blue denotes the variables and functions related to estimating the start position, whereas red denotes those related to estimating the end position.

Given the current hidden state h_i and the previous estimates u_{s_{i−1}} and u_{e_{i−1}}, we estimate the current start and end positions via Eq. 6 and Eq. 7:

s_i = argmax_t (α_t)    (6)
e_i = argmax_t (β_t)    (7)

where α_t and β_t represent the start score and end score corresponding to the t-th word in the document. We compute α_t and β_t with separate neural networks. These networks have the same architecture but do not share parameters.

Based on the strong empirical performance of Maxout Networks (Goodfellow et al., 2013) and Highway Networks (Srivastava et al., 2015), especially with regard to deep architectures, we propose a Highway Maxout Network (HMN) to compute α_t, as described by Eq. 8. The intuition behind this model is that the QA task consists of multiple question types and document topics; these variations may require different models to estimate the answer span, and maxout provides a simple and effective way to pool across multiple model variations.

α_t = HMN_start(u_t, h_i, u_{s_{i−1}}, u_{e_{i−1}})

Here, u_t is the coattention encoding corresponding to the t-th word in the document. HMN_start is illustrated in Figure 4. The end score β_t is computed analogously by a separate network HMN_end.

We now describe the HMN model:

HMN(u_t, h_i, u_{s_{i−1}}, u_{e_{i−1}}) = max(W^(3) [m_t^(1); m_t^(2)] + b^(3))    (8)
r = tanh(W^(D) [h_i; u_{s_{i−1}}; u_{e_{i−1}}])
m_t^(1) = max(W^(1) [u_t; r] + b^(1))
m_t^(2) = max(W^(2) m_t^(1) + b^(2))

where r ∈ R^l is a non-linear projection of the current state with parameters W^(D) ∈ R^{l×5l}; m_t^(1) is the output of the first maxout layer, with parameters W^(1) ∈ R^{p×l×3l} and b^(1) ∈ R^{p×l}; and m_t^(2) is the output of the second maxout layer, with parameters W^(2) ∈ R^{p×l×l} and b^(2) ∈ R^{p×l}. m_t^(1) and m_t^(2) are fed into the final maxout layer, with parameters W^(3) ∈ R^{p×1×2l} and b^(3) ∈ R^p. p is the pooling size of each maxout layer; the max operation computes the maximum value over the first dimension of a tensor. We note that there is a highway connection between the output of the first maxout layer and the last maxout layer.

Figure 4: Highway Maxout Network. Dotted lines denote highway connections.

To train the network, we minimize the cumulative softmax cross-entropy of the start and end points across all iterations. The iterative procedure halts when both the estimate of the start position and the estimate of the end position no longer change, or when a maximum number of iterations is reached. Details can be found in Section 4.1.
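The shapes in Eq. 8 can be checked with a small NumPy sketch of one HMN scoring pass over a toy document. This is our illustration with random weights, not the trained model; p is the maxout pool size, and the max is taken over the first (pool) dimension, as in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
l, p, m = 4, 3, 7                        # hidden size l, pool size p, document length
U = rng.normal(size=(2 * l, m))          # coattention encodings u_t as columns
h = rng.normal(size=l)                   # decoder LSTM state h_i
u_s, u_e = rng.normal(size=2 * l), rng.normal(size=2 * l)  # previous span reps

W_D = rng.normal(size=(l, 5 * l))
W1 = rng.normal(size=(p, l, 3 * l)); b1 = np.zeros((p, l))
W2 = rng.normal(size=(p, l, l));     b2 = np.zeros((p, l))
W3 = rng.normal(size=(p, 1, 2 * l)); b3 = np.zeros((p, 1))

r = np.tanh(W_D @ np.concatenate([h, u_s, u_e]))   # state projection, r in R^l
scores = np.empty(m)
for t in range(m):
    x = np.concatenate([U[:, t], r])               # [u_t ; r] in R^{3l}
    m1 = (W1 @ x + b1).max(axis=0)                 # first maxout layer, R^l
    m2 = (W2 @ m1 + b2).max(axis=0)                # second maxout layer, R^l
    # Highway connection: the final maxout sees both m1 and m2.
    scores[t] = (W3 @ np.concatenate([m1, m2]) + b3).max()

print("argmax position:", scores.argmax())         # Eq. 6 / Eq. 7
```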
3 RELATED WORK

Statistical QA. Traditional approaches to question answering typically involve rule-based algorithms or linear classifiers over hand-engineered feature sets. Richardson et al. (2013) proposed two baselines, one that uses simple lexical features such as a sliding window to match bags of words, and another that uses word distances between words in the question and in the document. Berant et al. (2014) proposed an alternative approach in which one first learns a structured representation of the entities and relations in the document in the form of a knowledge base, then converts the question to a structured query with which to match the content of the knowledge base. Wang et al. (2015) described a statistical model using frame-semantic features as well as syntactic features such as part-of-speech tags and dependency parses. Chen et al. (2016) proposed a competitive statistical baseline using a variety of carefully crafted lexical, syntactic, and word-order features.

Neural QA. Neural attention models have been widely applied to machine comprehension and question answering in NLP. Hermann et al. (2015) proposed an AttentiveReader model with the release of the CNN/Daily Mail cloze-style question answering dataset. Hill et al. (2016) released another dataset stemming from children's books and proposed a window-based memory network. Kadlec et al. (2016) presented a pointer-style attention mechanism that performs only one attention step. Sordoni et al. (2016) introduced an iterative neural attention model and applied it to cloze-style machine comprehension tasks.

Recently, Rajpurkar et al. (2016) released the SQuAD dataset. Unlike cloze-style queries, its answers include non-entities and longer phrases, and its questions are more realistic. For SQuAD, Wang & Jiang (2016b) proposed an end-to-end neural network model that consists of a Match-LSTM encoder, originally introduced in Wang & Jiang (2016a), and a pointer network decoder (Vinyals et al., 2015); Yu et al. (2016) introduced a dynamic chunk reader, a neural reading comprehension model that extracts a set of answer candidates of variable lengths from the document and ranks them to answer the question.

Lu et al. (2016) proposed a hierarchical co-attention model for visual question answering, which achieved state-of-the-art results on the COCO-VQA dataset (Antol et al., 2015). In (Lu et al., 2016), the co-attention mechanism computes a conditional representation of the image given the question, as well as a conditional representation of the question given the image.

Inspired by the above works, we propose a dynamic coattention model (DCN) that consists of a novel coattentive encoder and dynamic decoder. In our model, instead of estimating the start and end positions of the answer span in a single pass (Wang & Jiang, 2016b), we iteratively update the start and end positions in a similar fashion to the Iterative Conditional Modes algorithm (Besag, 1986).
4.1 IMPLEMENTATION DETAILS

We train and evaluate our model on the SQuAD dataset. To preprocess the corpus, we use the tokenizer from Stanford CoreNLP (Manning et al., 2014). We use GloVe word vectors pre-trained on the 840B Common Crawl corpus (Pennington et al., 2014). We limit the vocabulary to words that are present in the Common Crawl corpus and set embeddings for out-of-vocabulary words to zero. Empirically, we found that training the embeddings consistently led to overfitting and subpar performance, and hence only report results with fixed word embeddings.

We use a maximum sequence length of 600 during training and a hidden state size of 200 for all recurrent units, maxout layers, and linear layers. All LSTMs have randomly initialized parameters and an initial state of zero. Sentinel vectors are randomly initialized and optimized during training. For the dynamic decoder, we set the maximum number of iterations to 4 and use a maxout pool size of 16. We use dropout to regularize our network during training (Srivastava et al., 2014), and optimize the model using ADAM (Kingma & Ba, 2014). All models are implemented and trained with Chainer (Tokui et al., 2015).

4.2 RESULTS

Evaluation on the SQuAD dataset consists of two metrics. The exact match score (EM) calculates the exact string match between the predicted answer and a ground truth answer. The F1 score calculates the overlap between words in the predicted answer and a ground truth answer. Because a document-question pair may have several ground truth answers, the EM and F1 for a document-question pair are taken to be the maximum values across all ground truth answers. The overall metric is then computed by averaging over all document-question pairs. The official SQuAD evaluation is hosted on CodaLab². The training and development sets are publicly available while the test set is withheld.

The performance of the Dynamic Coattention Network on the SQuAD dataset, compared to other submitted models on the leaderboard³, is shown in Table 1. At the time of writing, our single-model DCN ranks first at 66.2% exact match and 75.9% F1 on the test data among single-model submissions. Our ensemble DCN ranks first overall at 71.6% exact match and 80.4% F1 on the test data.

³https://rajpurkar.github.io/SQuAD-explorer

Model                                   Dev EM  Dev F1  Test EM  Test F1
Ensemble
DCN (Ours)                              70.3    79.4    71.2     80.4
Microsoft Research Asia *               -       -       69.4     78.3
Allen Institute *                       69.2    77.8    69.9     78.1
Singapore Management University         67.6    76.8    67.9     77.0
Google NYC *                            68.2    76.7    -        -
Single model
DCN (Ours)                              65.4    75.6    66.2     75.9
Microsoft Research Asia *               65.9    75.2    65.5     75.0
Google NYC *                            66.4    74.9    -        -
Singapore Management University         64.7    73.7    -        -
Carnegie Mellon University *            62.5    73.3    -        -
Dynamic Chunk Reader (Yu et al., 2016)  62.5    71.2    62.5     71.0
Match-LSTM (Wang & Jiang, 2016b)        59.1    70.0    59.5     70.3
Baseline (Rajpurkar et al., 2016)       40.0    51.0    40.4     51.0
Human (Rajpurkar et al., 2016)          81.4    91.0    82.3     91.2

Table 1: Leaderboard performance at the time of writing (Nov 4 2016). * indicates that the model used for submission is unpublished. - indicates that the development scores were not publicly available at the time of writing.
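As a minimal sketch of the evaluation just described (not the official script, which also lowercases and strips punctuation and articles), the following computes EM and bag-of-tokens F1, taking the maximum over all ground-truth answers:

```python
from collections import Counter

def f1(prediction: str, truth: str) -> float:
    p, t = prediction.split(), truth.split()
    common = Counter(p) & Counter(t)      # multiset of shared tokens
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(t)
    return 2 * precision * recall / (precision + recall)

def score(prediction: str, truths: list) -> tuple:
    em = max(float(prediction == t) for t in truths)
    return em, max(f1(prediction, t) for t in truths)

print(score("steam turbine plants", ["steam turbine plants", "steam turbines"]))
# -> (1.0, 1.0): both metrics take the maximum over the ground-truth answers
```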
The DCN has the capability to estimate the start and end points of the answer span multiple times, each time conditioned on its previous estimates. By doing so, the model is able to explore local maxima corresponding to multiple plausible answers, as is shown in Figure 5.

(Figure 5, distributions not reproduced. The three examples shown are: Question 1: "Who recovered Tolbert's fumble?"; answer: Danny Trevathan; ground truth: Danny Trevathan. Question 2: "What did the Kenyan business people hope for when meeting with the Chinese?"; answer: gain support from China for a planned $2.5 billion railway; ground truth: support from China for a planned $2.5 billion railway. Question 3: "What kind of weapons did Tesla's treatise concern?"; answer: particle beam weapons; ground truth: charged particle beam.)

Figure 5: Examples of the start and end conditional distributions produced by the dynamic decoder. Odd (blue) rows denote the start distributions and even (red) rows denote the end distributions. i indicates the iteration number of the dynamic decoder. Higher probability mass is indicated by darker regions. The offset corresponding to the word with the highest probability mass is shown on the right hand side. The predicted span is underlined in red, and a ground truth answer span is underlined in green.

For example, Question 1 in Figure 5 demonstrates an instance where the model initially guesses an incorrect start point and a correct end point. In subsequent iterations, the model adjusts the start point, ultimately arriving at the correct start point in iteration 3. Similarly, the model gradually shifts probability mass for the end point to the correct word.

Question 2 shows an example in which both the start and end estimates are initially incorrect. The model then settles on the correct answer in the next iteration.

While the dynamic nature of the decoder allows the model to escape initial local maxima corresponding to incorrect answers, Question 3 demonstrates a case where the model is unable to decide between multiple local maxima despite several iterations. Namely, the model alternates between the answers "charged particle beam" and "particle beam weapons" indefinitely. Empirically, we observe that the model, trained with a maximum of 4 iterations, takes 2.7 iterations to converge to an answer on average.
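The decoding procedure discussed above can be summarized by a small control-flow sketch. The score tables below are random stand-ins for the HMN outputs plus the LSTM state update, so this only illustrates the alternate-and-halt logic (maximum of 4 iterations, stop once both estimates are stable), not real model behavior.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 20                                   # document length
alpha = rng.normal(size=(4, m))          # stand-in start scores per iteration
beta = rng.normal(size=(4, m))           # stand-in end scores per iteration

s, e = 0, m - 1                          # arbitrary initial span
for i in range(4):                       # max iterations = 4, as in Section 4.1
    s_new = int(alpha[i].argmax())       # Eq. 6: s_i = argmax_t alpha_t
    e_new = int(beta[i].argmax())        # Eq. 7: e_i = argmax_t beta_t
    if (s_new, e_new) == (s, e):         # halt once the estimates stop changing
        break
    s, e = s_new, e_new
print("predicted span:", (s, e))
```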
Model Ablation. The performance of our model and its ablations on the SQuAD development set is shown in Table 2. On the decoder side, we experiment with various pool sizes for the HMN maxout layers, with using a 2-layer MLP instead of an HMN, and with forcing the HMN decoder to a single iteration. Empirically, we achieve the best performance on the development set with an iterative HMN with pool size 16, and find that the model consistently benefits from a deeper, iterative decoder network. The performance improves as the maximum number of allowed iterations increases, with little improvement after 4 iterations. On the encoder side, replacing the coattention mechanism with an attention mechanism similar to Wang & Jiang (2016b), by setting C^D to Q A^D in Eq. 3, results in a 1.9 point F1 drop. This suggests that, at the additional cost of a softmax computation and a dot product, the coattention mechanism provides a simple and effective means to better encode the document and question sequences. Further studies, such as performance without attention and performance on questions requiring different types of reasoning, can be found in the appendix.

Model                                     Dev EM  Dev F1
Dynamic Coattention Network (DCN)
  pool size 16 HMN                        65.4    75.6
  pool size 8 HMN                         64.4    74.9
  pool size 4 HMN                         65.2    75.2
DCN with 2-layer MLP instead of HMN       63.8    74.4
DCN with single-iteration decoder         63.7    74.0
DCN with Wang & Jiang (2016b) attention   63.7    73.7

Table 2: Single-model ablations on the development set.

Performance across length. One point of interest is how the performance of the DCN varies with respect to the length of the document. Intuitively, we expect the model performance to deteriorate with longer examples, as is the case with neural machine translation (Luong et al., 2015). However, as shown in Figure 6, there is no notable performance degradation for longer documents and questions, contrary to our expectations. This suggests that the coattentive encoder is largely agnostic to long documents, and is able to focus on small sections of relevant text while ignoring the rest of the (potentially very long) document. We do note a performance degradation with longer answers. However, this is intuitive given the nature of the evaluation metric: it becomes increasingly challenging to compute the correct word span as the number of words increases.

Figure 6: Performance of the DCN for various lengths of documents, questions, and answers (plots not reproduced; panels are "# Tokens in Document", "# Tokens in Question", and "Average # Tokens in Answer"). The blue dot indicates the mean F1 at a given length. The vertical bar represents the standard deviation of F1s at a given length.

Performance across question type. Another natural way to analyze the performance of the model is to examine its performance across question types. In Figure 7, we note that the mean F1 of the DCN exceeds those of previous systems (Wang & Jiang, 2016b; Yu et al., 2016) across all question types. The DCN, like other models, is adept at "when" questions and struggles with the more complex "why" questions.

Figure 7: Performance of the DCN across question types, What, Who, How, When, Which, Where, Why, and Other (plot not reproduced). The height of each bar represents the mean F1 for the given question type. The lower number denotes how many instances of that question type are in the dev set (6073, 1242, 1187, 712, 642, 474, 150, and 90, respectively).

Breakdown of F1 distribution. Finally, we note that the DCN performance is highly bimodal. On the development set, the model perfectly predicts (100% F1) an answer for 62.2% of examples and predicts a completely wrong answer (0% F1) for 16.3% of examples. That is, the model picks out
partial answers only 21.5% of the time. Upon qualitative inspection of the 0% F1 answers, some of which are shown in Appendix A.4, we observe that when the model is wrong, its mistakes tend to have the correct "answer type" (e.g., a person for a "who" question, a method for a "how" question) and the answer boundaries encapsulate a well-defined phrase.

5 CONCLUSION

We proposed the Dynamic Coattention Network, an end-to-end neural network architecture for question answering. The DCN consists of a coattention encoder, which learns co-dependent representations of the question and of the document, and a dynamic decoder, which iteratively estimates the answer span. We showed that the iterative nature of the model allows it to recover from initial local maxima corresponding to incorrect predictions. On the SQuAD dataset, the DCN achieves state-of-the-art results at 75.9% F1 with a single model and 80.4% F1 with an ensemble. The DCN significantly outperforms all other models.

ACKNOWLEDGMENTS

We thank Kazuma Hashimoto and Bryan McCann for their help and insights.

REFERENCES

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Association for Computational Linguistics (ACL), 2016.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. Attention-over-attention neural networks for reading comprehension. arXiv preprint arXiv:1607.04423, 2016.

Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C. Courville, and Yoshua Bengio. Maxout networks. ICML (3), 28:1319-1327, 2013.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1693-1701, 2015.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. In ICLR, 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. arXiv preprint arXiv:1606.00061, 2016.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1412-1421. Association for Computational Linguistics, September 2015.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pp. 55-60, 2014.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.

Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 3, pp. 4, 2013.

Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. Iterative alternating neural attention for machine reading. arXiv preprint arXiv:1606.02245, 2016.

Rupesh K. Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks. In Advances in Neural Information Processing Systems 28, pp. 2377-2385, 2015.

Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. Machine comprehension with syntax, frames, and semantics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pp. 700-706. Association for Computational Linguistics, 2015.

Shuohang Wang and Jing Jiang. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016b.

A.1 PERFORMANCE WITHOUT ATTENTION

In our experiments, we also investigate a model without any attention mechanism. In this model, the encoder is a simple LSTM network that first ingests the question and then ingests the document. The hidden states corresponding to words in the document are then passed to the decoder. This model achieves 33.3% exact match and 41.9% F1, significantly worse than models with attention.

We generate predictions for examples requiring different types of reasoning, given by Rajpurkar et al. (2016). Because this set of examples is very limited, they do not conclusively demonstrate the effectiveness of the model on different types of reasoning tasks. Nevertheless, these examples show that the DCN is a promising architecture for challenging question answering tasks, including those that involve reasoning over multiple sentences.

The Rankine cycle is sometimes referred to as a practical Carnot cycle because, when an efficient turbine is used, the TS diagram begins to resemble the Carnot cycle.

Ground truth practical Carnot cycle

Prediction practical Carnot cycle

WHICH TWO GOVERNING BODIES HAVE LEGISLATIVE VETO POWER?

While the Commission has a monopoly on initiating legislation, the European Parliament and the Council of the European Union have powers of amendment and veto during the legislative process.

Type of reasoning Lexical variation (world knowledge)

Ground truth the European Parliament and the Council of the European Union

Prediction European Parliament and the Council of the European Union

WHAT SHAKESPEARE SCHOLAR IS CURRENTLY ON THE UNIVERSITY'S FACULTY?

Current faculty include the anthropologist Marshall Sahlins, historian Dipesh Chakrabarty, ... Shakespeare scholar David Bevington, and renowned political scientists John Mearsheimer and Robert Pape.

Type of reasoning Syntactic variation

Prediction David Bevington

The V&A Theatre & Performance galleries, formerly the Theatre Museum, opened in March 2009. The collections are stored by the V&A, and are available for research, exhibitions and other shows.
They hold the UK's biggest national collection of material about live performance in the UK since Shakespeare's day, covering drama, dance, musical theatre, circus, music hall, rock and pop, and most other forms of live entertainment.

Type of reasoning Multiple sentence reasoning

Ground truth Material about live performance

Prediction UK's biggest national collection of material about live performance in the UK since Shakespeare's day

Along with giving the offender his "just deserts", achieving crime control via incapacitation and deterrence is a major goal of crime punishment.

Ground truth achieving crime control via incapacitation and deterrence

Prediction achieving crime control via incapacitation and deterrence

ID 572882242ca10214002da420

The Mongol rulers patronized the Yuan printing industry. Chinese printing technology was transferred to the Mongols through the Kingdom of Qocho and Tibetan intermediaries. Some Yuan documents such as Wang Zhen's Nong Shu were printed with earthenware movable type, a technology invented in the 12th century. However, most published works were still produced through traditional block printing techniques. The publication of a Taoist text inscribed with the name of Töregene Khatun, Ögedei's wife, is one of the first printed works sponsored by the Mongols. In 1273, the Mongols created the Imperial Library Directorate, a government-sponsored printing office. The Yuan government established centers for printing throughout China. Local schools and government agencies were funded to support the publishing of books.

Ground truth through Kingdom of Qocho and Tibetan intermediaries

Prediction through Kingdom of Qocho and Tibetan intermediaries

WHO APPOINTS ELDERS?

ID 5730d473b7151e1900c0155b

Elders are called by God, affirmed by the church, and ordained by a bishop to a ministry of Word, Sacrament, Order and Service within the church. They may be appointed to the local church, or to other valid extension ministries of the church. Elders are given the authority to preach the Word of God, administer the sacraments of the church, to provide care and counseling, and to order the life of the church for ministry and mission. Elders may also be assigned as District Superintendents, and they are eligible for election to the episcopacy. Elders serve a term of 2-3 years as provisional Elders prior to their ordination.

Ground truth bishop, the local church

AN ALGORITHM FOR X WHICH REDUCES TO C WOULD ALLOW US TO DO WHAT?

ID 56e1ce08e3433e14004231a6

This motivates the concept of a problem being hard for a complexity class. A problem X is hard for a class of problems C if every problem in C can be reduced to X. Thus no problem in C is harder than X, since an algorithm for X allows us to solve any problem in C. Of course, the notion of hard problems depends on the type of reduction being used. For complexity classes larger than P, polynomial-time reductions are commonly used. In particular, the set of problems that are hard for NP is the set of NP-hard problems.

ID 572fd7b8947a6a140053cd3e

Parliamentary time is also set aside for question periods in the debating chamber. A "General Question Time" takes place on a Thursday between 11:40 a.m. and 12 p.m. where members can direct questions to any member of the Scottish Government. At 2:30 p.m., a 40-minute long themed "Question Time" takes place, where members can ask questions of ministers in departments that are selected for questioning that sitting day, such as health and justice or education and transport. Between 12 p.m. and 12:30 p.m. on Thursdays, when Parliament is sitting, First Minister's Question Time
where members can direct questions to any member of the Scottish Government. At 2:30 p.m., a 40-minute long themed "Question Time" takes place, where members can ask questions of ministers in departments that are selected for questioning that sitting day, such as health and justice or education and transport. Between 12 p.m. and 12:30 p.m. on Thursdays, when Parliament is sitting, First Minister's Question Time takes place. This gives members an opportunity to question the First Minister directly on issues under their jurisdiction. Opposition leaders ask a general question of the First Minister and then supplementary questions. Such a practice enables a "lead-in" to the questioner, who then uses their supplementary question to ask the First Minister on any issue. The four general questions available to opposition leaders are:

WHAT ARE SOME OF THE ACCEPTED GENERAL PRINCIPLES OF EUROPEAN UNION LAW?

ID 5726a00cf1498d1400e8e551

The principles of European Union law are rules of law which have been developed by the European Court of Justice that constitute unwritten rules which are not expressly provided for in the treaties but which affect how European Union law is interpreted and applies. In formulating these principles, the courts have drawn on a variety of sources, including: public international law and legal doctrines and principles present in the legal systems of European Union member states and in the jurisprudence of the European Court of Human Rights. Accepted general principles of European Union Law include fundamental rights (see human rights), proportionality, legal certainty, equality before the law and subsidiarity.

Ground truth fundamental rights (see human rights), proportionality, legal certainty, equality before the law and subsidiarity.

Prediction fundamental rights (see human rights), proportionality, legal certainty, equality before the law and subsidiarity

On 24 March 1879, Tesla was returned to Gospić under police guard for not having a residence permit. On 17 April 1879, Milutin Tesla died at the age of 60 after contracting an unspecified illness (although some sources say that he died of a stroke). During that year, Tesla taught a large class of students in his old school, Higher Real Gymnasium, in Gospić.

Ground truth not having a residence permit

Prediction not having a residence permit

European Union law is applied by the courts of member states and the Court of Justice of the European Union. Where the laws of member states provide for lesser rights, European Union law can be enforced by the courts of member states. In case of European Union law which should have been transposed into the laws of member states, such as Directives, the European Commission can take proceedings against the member state under the Treaty on the Functioning of the European Union. The European Court of Justice is the highest court able to interpret European Union law.
Supplementary sources of European Union law include case law by the Court of Justice, international law and general principles of European Union law.

Prediction case law by the Court of Justice

Comment The prediction produced by the model is correct; however, it was not selected by Mechanical Turk annotators.

WHO DESIGNED THE ILLUMINATION SYSTEMS THAT TESLA ELECTRIC LIGHT & MANUFACTURING INSTALLED?

I 56e0d6cf231d4119001ac424

After leaving Edison's company Tesla partnered with two businessmen in 1886, Robert Lane and Benjamin Vail, who agreed to finance an electric lighting company in Tesla's name, Tesla Electric Light & Manufacturing. The company installed electrical arc light based illumination systems designed by Tesla and also had designs for dynamo electric machine commutators, the first patents issued to Tesla in the US.

Comment The model produces an incorrect prediction that corresponds to the people that funded Tesla, instead of Tesla, who actually designed the illumination system. Empirically, we find that most mistakes made by the model have the correct type (e.g. named entity type) despite not including types as prior knowledge to the model. In this case, the incorrect response has the correct type of person.

ID 57265746dd62a815002e821a

Cydippid ctenophores have bodies that are more or less rounded, sometimes nearly spherical and other times more cylindrical or egg-shaped; the common coastal "sea gooseberry," Pleurobrachia, sometimes has an egg-shaped body with the mouth at the narrow end, although some individuals are more uniformly round. From opposite sides of the body extends a pair of long, slender tentacles, each housed in a sheath into which it can be withdrawn. Some species of cydippids have bodies that are flattened to various extents, so that they are wider in the plane of the tentacles.

Prediction spherical

Comment Although the mistake is subtle, the prediction is incorrect. The statement "are more or less rounded, sometimes nearly spherical" suggests that the entity is more often "rounded" than "spherical" or "cylindrical" or "egg-shaped" (an answer given by an annotator). This suggests that the model has trouble discerning among multiple intuitive answers due to a lack of understanding of the relative severity of "more or less" versus "sometimes" and "other times"."}]
r1BJLw9ex | [{"section_index": "0", "section_name": "ADJUSTING FOR DROPOUT VARIANCE IN BATCH NORMALIZATION AND WEIGHT INITIALIZATION", "section_text": "Dan Hendrycks
Kevin Gimpel

ABSTRACT

We show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neuron between train and test, the variance of the input differs. We thus propose a new weight initialization by correcting for the influence of dropout rates and an arbitrary nonlinearity's influence on variance through simple corrective scalars. Since Batch Normalization trained with dropout estimates the variance of a layer's incoming distribution with some inputs dropped, the variance also differs between train and test. After training a network with Batch Normalization and dropout, we simply update Batch Normalization's variance moving averages with dropout off and obtain state of the art on CIFAR-10 and CIFAR-100 without data augmentation.

1 INTRODUCTION

Weight initialization and Batch Normalization greatly influence a neural network's ability to learn. Both methods can allow for a unit-variance neuron input distribution. This is desirable because variance larger or smaller than one may cause activation outputs to explode or vanish. In order to encourage unit variance, early weight initialization attempts sought to adjust for a neuron's fan-in (LeCun et al., 1998). More recent initializations correct for a neuron's fan-out (Glorot & Bengio, 2010). Meanwhile, some weight initializations compensate for the compressiveness of the ReLU nonlinearity (the ReLU's tendency to reduce output variance) (He et al., 2015). Indeed, He et al. (2015) also show that initializations without a specific, small corrective factor can render a neural network untrainable. To address this issue Batch Normalization (Ioffe & Szegedy, 2015) reduces the role of weight initialization at the cost of up to 30% more computation (Mishkin & Matas, 2015). A less computationally expensive solution is the LSUV weight initialization, yet this still requires computing batch statistics, a special forward pass, and makes no adjustment for backpropagation error signal variance (Mishkin & Matas, 2015). Similarly, weight normalization uses a special feedforward pass and computes batch statistics (Salimans & Kingma, 2016). The continued development of variance-stabilizing techniques testifies to their importance for neural networks.

Both Batch Normalization and previous weight initializations do not accommodate the variance introduced by dropout, and we contribute methods to fix this. First we demonstrate a new weight initialization technique which includes a new correction factor for a layer's dropout rate and adjusts for an arbitrary nonlinearity's effect on the neuron output variance. All of this is obtained without computing batch statistics or special adjustments to the forward pass, unlike recent methods to control variance (Ioffe & Szegedy, 2015; Mishkin & Matas, 2015; Salimans & Kingma, 2016). By this new initialization, we enable faster and more accurate convergence. Afterward, we show that networks trained with Batch Normalization can improve their accuracy by adjusting for dropout's variance.
We accomplish this by training a network with both Batch Normalization and dropout; then, after training, we feed forward the training dataset with dropout off to re-estimate the Batch Normalization variance estimates. Because of this simple, general technique, we obtain state of the art on CIFAR-10 and CIFAR-100 without data augmentation.

*Work done while the author was at TTIC. Code available at github.com/hendrycks/init

Toyota Technological Institute at Chicago kgimpel@ttic.edu

In this section, we derive our new initialization by considering a neuron input distribution and its major sources of variance. We accomplish this by separately considering the feedforward and the backpropagation stages.

2.1.1 THE FORWARD PASS

We use f to denote the pointwise nonlinearity in each neural network layer. For simplicity, we use the term "neuron" to refer to an entry in a layer before applying f. Let us also call the input of the l-th layer z^{l-1}, and let the n_in × n_out weight matrix W^l map from layer l-1 to l. Let an entry of this matrix be w^l. In our upcoming initialization, we initialize each column of W^l on the unit hypersphere so that each column has an l2 norm of 1. Now, if we assume that this network is trained with a dropout keep rate of p, we must scale the output of a layer by 1/p. Also assume f(z^{l-1}) and W^l are zero-centered. With that now specified, we conclude that neuron i of layer z^l has the variance

$$\mathrm{Var}(z_i^l) = \mathrm{Var}\Big(\frac{1}{p}\sum_{k=1}^{n_{\mathrm{in}}} w_{ik}^l f(z_k^{l-1})\Big) \approx \frac{n_{\mathrm{in}}}{p}\,\mathrm{Var}(w^l)\,\mathbb{E}\big[f(z^{l-1})^2\big] = \frac{\mathbb{E}\big[f(z^{l-1})^2\big]}{p}$$

because Var(w^l) = 1/n_in, since we initialized W^l's columns on the unit hypersphere. Knowing this variance allows us to adjust for the influence of an arbitrary nonlinearity and a desired dropout rate.

We empirically verify that a weight initialization with this forward correction allows for consistent input distribution variance throughout the layers of a 20-layer fully connected network for differing dropout rates. The first 15 layers have 500 neurons, and the last 5 layers have 250 neurons. Specifically, we can encourage unit variance by dividing W^l, initialized on the unit hypersphere, by sqrt(E[f(z^{l-1})^2]/p). Let us compare this correction to other initializations by feeding forward a random standard normal matrix through 20 layers. Figure 1 shows the results of such an experiment, and in the experiment we use a ReLU activation function. Of course, as the He initialization was designed specifically for the ReLU, it performs well when p = 1, but it has an exploding distribution when there is dropout. Only the initialization with a sqrt(E[f(z^{l-1})^2]/p) corrective term demonstrates stability whether or not a feedforward pass uses dropout.

A similar analysis of the backward pass shows that if L is our loss function and δ^l = ∂L/∂z^l, then

$$\mathrm{Var}(\delta^l) \approx p\, n_{\mathrm{out}}\, \mathrm{Var}(w^{l+1})\, \mathrm{Var}(\delta^{l+1})\, \mathbb{E}\big[f'(z^l)^2\big] = p\, \mathbb{E}\big[f'(z^l)^2\big]\, \mathrm{Var}(\delta^{l+1}).$$

In Appendix A we empirically verify that this backward correction allows for consistent backpropagation error signal variance throughout the layers of a 20-layer network for differing dropout rates.

2.2 OUR INITIALIZATION

We want that Var(z_i^l) = 1 and Var(δ^l) = Var(δ^{l+1}). To meet these different goals, we can initialize our weights by adding these variances, while others take the arithmetic mean of these variances or ignore the backpropagation variance altogether (He et al., 2015; Glorot & Bengio, 2010).
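To see the forward correction in action, the following minimal NumPy sketch (our own illustration, not the authors' released code; the width, depth, and keep rate are arbitrary choices) feeds a standard normal batch through a deep ReLU network with unit-hypersphere weights and prints the final-layer input variance with and without the sqrt(E[f(z)^2]/p) scaling.

```python
import numpy as np

rng = np.random.default_rng(0)

def hypersphere_init(n_in, n_out):
    """Weight matrix whose columns lie on the unit hypersphere (unit l2 norm)."""
    W = rng.standard_normal((n_in, n_out))
    return W / np.linalg.norm(W, axis=0, keepdims=True)

def forward(x, depth=20, p=0.3, corrected=True):
    relu_moment = 0.5  # E[f(z)^2] for a ReLU with standard normal input (Table 1)
    for _ in range(depth):
        W = hypersphere_init(x.shape[1], x.shape[1])
        if corrected:
            W = W / np.sqrt(relu_moment / p)  # forward dropout/nonlinearity correction
        keep = rng.random(x.shape) < p        # inverted dropout with keep rate p
        x = (np.maximum(x, 0.0) * keep / p) @ W
    return x

x0 = rng.standard_normal((512, 500))
for corrected in (True, False):
    z = forward(x0, corrected=corrected)
    print(f"corrected={corrected}: final-layer input variance {z.var():.3f}")
```

With p = 0.3 the uncorrected variance is multiplied by roughly E[f(z)^2]/p ≈ 1.67 per layer and explodes after 20 layers, while the corrected network stays near unit variance.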
Therefore, if W^l has its columns sampled uniformly from the surface of a unit hypersphere or is an orthonormal matrix, then our initialization is

$$\frac{W^l}{\sqrt{\mathbb{E}\big[f(z^{l-1})^2\big]/p + p\,\mathbb{E}\big[f'(z^l)^2\big]}}.$$

For convolutional neural networks, adjusting for the backpropagation signal is less common, so one could simply use the initialization W^l / sqrt(E[f(z^{l-1})^2]/p). This initialization accounts for the influence of dropout rates and an arbitrary nonlinearity. We can simply initialize a random standard Gaussian matrix and normalize its last dimension to generate W^l. Another strength of this initialization is that the expectations are similar to the values in Table 1 for standardized input data, so computing mini-batch statistics is needless for our initialization (Hendrycks & Gimpel, 2016). We need only substitute in appropriate scalars during initialization. Let us now see these new adjustments in action.

[Figure 1 plots omitted: probability density functions of neuron inputs and outputs at layers 5, 10, 15, and 20, with and without dropout, under each initialization.]

Figure 1: A comparison of a unit hypersphere initialization with a forward correction, the Xavier initialization, and the He initialization. Each plot shows the probability density function of a neuron's inputs and outputs across layers. In particular, the ranges of values vary widely between the initializations, with exponential blowups and decay for the He and Xavier initializations, respectively. Values set to zero by dropout are removed from the probability density functions.

Table 1: Activation adjustment estimates for z^{l-1}, z^l following a standard normal distribution.

| Activation | E[f(z^{l-1})^2] | E[f'(z^l)^2] |
|---|---|---|
| Identity | 1 | 1 |
| ReLU | 0.5 | 0.5 |
| GELU (μ = 0, σ = 1) | 0.425 | 0.444 |
| tanh | 0.394 | 0.216 |
| ELU (α = 1) | 0.645 | 0.671 |
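For concreteness, here is one way the full initializer could be written down; this is our own sketch (the function and dictionary names are ours), plugging the Table 1 constants into the corrective scalar above.

```python
import numpy as np

# (E[f(z)^2], E[f'(z)^2]) for z ~ N(0, 1), taken from Table 1.
ADJUSTMENTS = {
    "identity": (1.0, 1.0),
    "relu": (0.5, 0.5),
    "gelu": (0.425, 0.444),
    "tanh": (0.394, 0.216),
    "elu": (0.645, 0.671),
}

def dropout_corrected_init(n_in, n_out, keep_rate=1.0, nonlinearity="relu",
                           backward_term=True, rng=None):
    """Unit-hypersphere columns scaled by the dropout/nonlinearity correction."""
    rng = rng or np.random.default_rng()
    fwd, bwd = ADJUSTMENTS[nonlinearity]
    W = rng.standard_normal((n_in, n_out))
    W /= np.linalg.norm(W, axis=0, keepdims=True)  # columns on the unit hypersphere
    # Drop the backward term for the convolutional variant described above.
    scale = fwd / keep_rate + (keep_rate * bwd if backward_term else 0.0)
    return W / np.sqrt(scale)

# Example: a 500-unit ReLU layer trained with a dropout keep rate of 0.5.
W = dropout_corrected_init(500, 500, keep_rate=0.5)
print(W.shape)
```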
In the experiments that follow, we utilize the MNIST dataset, a 10-class grayscale image dataset of handwritten digits with 60k training examples and 10k test examples. Then we consider CIFAR-10 (Krizhevsky, 2009), a 10-class color image dataset with 50k training examples and 10k test examples. We use these data to compare our initialization with the Xavier and He initializations on a fully connected neural network, and with the He initialization on a convolutional neural network.

Let us verify that our initialization competes with previous weight initialization schemes. To this end, we train a fully connected neural network with ReLUs, ELUs (α = 1), and the tanh activation (Clevert et al., 2016). Each 8-layer, 256-neuron-wide neural network is trained for 25 epochs with a batch size of 64. From the training set, 5,000 examples are held out for a validation set. With the validation set, we tune over the learning rates {10^-3, 10^-4, 10^-5} and 7 other learning rates randomly chosen from [10^-1, 10^-5]. We optimize with Adam (Kingma & Ba, 2015). We also perform this task with no dropout, a dropout keep rate of 0.5, and a dropout keep rate of 0.3. Figure 2 indicates that our initialization provides faster convergence at a dropout keep rate of 0.5 for activations like the ReLU, and great gains when the dropout keep rate decreases further.

[Figure 2 panels omitted: training and test log loss curves for tanh, ReLU, and ELU networks at dropout keep rates p = 1.0, 0.5, and 0.3.]

Figure 2: MNIST classification results. The first row shows the log loss curves for the tanh unit, the second row is for the ReLU, and the third the ELU. The leftmost column shows loss curves when there is no dropout, the middle when the dropout keep rate is 0.5, and the rightmost when the dropout keep rate is 0.3. Each curve is selected by tuning over learning rates.

Since VGG Nets (Simonyan & Zisserman, 2015) require considerable regularization and careful initialization, we use a highly regularized variant (Zagoruyko, 2015) of the architecture for our next initialization experiment. The VGG Net-like network has the stacks (2 × 3 × 64), (2 × 3 × 128), (3 × 3 × 256), (3 × 3 × 512), (3 × 3 × 512) followed by two fully connected layers, each with 512 neurons. To regularize the deep network, we keep 70% of the neurons in the first layer, 60% in layer 3, 60% in the first two layers of the last three stacks, and 50% in the fully connected layers. Max pooling occurs after every stack, ReLU activations are applied on every neuron, and we l2-regularize with a strength of 5 × 10^-4. Layer width, filter count, l2 regularization strength, and dropout rate hyperparameters are from Zagoruyko (2015). We compare our initialization (while deactivating the backpropagation variance term) and the He initialization. Since the Xavier initialization is not as prominent in convolutional neural networks we do not test it. We optimize this network with two different optimizers. First, we use Nesterov momentum and tune with 10 learning rates, with 7 chosen randomly from [10^-1, 10^-5] and three deterministically chosen from {10^-2, 10^-3, 10^-4}. In separate runs, we train with the Adam optimizer and tune with 10 learning rates, with 7 chosen randomly from [10^-1, 10^-5] and three deterministically chosen from {10^-3, 10^-4, 10^-5}. With both optimizers we decay the learning rate by 0.1 at the 100th and 125th epochs while training for 150 epochs.

[Figure 3 plot omitted: gradient of the loss with respect to the first layer of the VGG Net (log scale) over 150 epochs, for our initialization and the He initialization.]

Figure 3: Our weight initialization enables consistent, healthy gradients; a worse initialization may require dozens of epochs to begin optimizing the first layer due to vanishing gradients.
The results in Figure 4 demonstrate the importance of small corrective dropout factors because the factors' influence on neuron input variance changes exponentially as the network depth increases, and this can lead to vanishing update signals as shown in Figure 3. We should note that the He initialization rendered the network untrainable for more learning rates, as when the learning rate was 0.01 with Nesterov momentum. However, the network converged with our initialization at this learning rate. Ultimately, our initialization provided more consistent, quick, and accurate convergence. With the Adam optimizer, the VGG Net obtained 9.51% test set error under our initialization and 10.54% with the He initialization. Last, we use Nesterov momentum, which is a more common optimizer for deep convolutional neural networks. With Nesterov momentum, we obtained 7.41% error with our initialization and 24.65% with the He initialization.

[Figure 4 plots omitted: CIFAR-10 VGG Net log loss curves over 150 epochs under Nesterov momentum (left) and Adam (right), for our initialization and the He initialization.]

Figure 4: CIFAR-10 VGG Net results. The left convergence curves show how the network trained with Nesterov momentum, and the right shows the training under the Adam optimizer. Training set log losses are the darker curves, and the fainter curves are the test set log loss curves.

Batch Normalization aims to prevent an exploding or vanishing feedforward signal just like a good weight initialization. However, Batch Normalization has its own caveats. A practical concern, voiced in Mishkin & Matas (2015), is that Batch Normalization can increase the feedforward time by up to 30%. Also, Ba et al. (2016) remind us that Batch Normalization cannot be applied to tasks with small batch sizes or online learning tasks lest we normalize a batch based upon mean and variance estimates from a small or single example. Batch Normalization can be used to stabilize the feedforward signal of a network with dropout, removing the need for a weight initialization which corrects for dropout variance. However, correcting for dropout's variance is still necessary: it so happens that Batch Normalization requires that its own variance estimates be corrected before testing.

In this section we empirically show that Batch Normalization with dropout also requires special care, because Batch Normalization's estimated variance should differ between train and test. In our weight initialization derivation (Section 2.1), we saw that the variance of a neuron's input grows when dropout is active. Consequently, Batch Normalization's variance estimates are greater when dropout is active, because a neuron's input variance is greater with dropout. But dropout is deactivated during testing, and the variance estimates Batch Normalization normally uses are accurate when dropout is active, not inactive. For this reason, we re-estimate the Batch Normalization variance parameters after training. We accomplish this by simply feeding forward the training data with dropout deactivated and only letting the Batch Normalization variance running averages update. When re-estimating the variance, no backpropagation occurs. Let us now verify that re-estimating the Batch Normalization variance values improves test performance.

In our experiment, we turn our attention to state-of-the-art convolutional neural networks, as they use Batch Normalization and dropout. For example, Densely Connected Networks (DenseNets) (Huang et al., 2016) use dropout with Batch Normalization when training without data augmentation. Training without data augmentation is of interest because it demonstrates how data efficient an architecture is, and some images have their meaning destroyed under augmentation like mirroring (e.g., the mirror image of the digit "7" is meaningless).
Moreover, Zagoruyko & Komodakis (2016) use dropout even when there is data augmentation, as Batch Normalization alone does not sufficiently regularize the network. Please note that in this experiment we are only testing the effect of Batch Normalization variance adjustments, and we are not testing the effect of different weight initializations.

We turn to DenseNets in this experiment because, to our knowledge, they hold the state of the art on CIFAR-10 and CIFAR-100 without data augmentation. We train a DenseNet with dropout and Batch Normalization and re-estimate the Batch Normalization variance parameters outside of training to achieve large error reductions. These DenseNets are trained just as described in the original paper except that every 5 epochs we reset the momentum variable following a discussion with a DenseNet paper author, as this might improve accuracy. We save the DenseNet model when it has trained for half of the scheduled epochs (when it is "Halfway") and when it is entirely done training. Then, using these models, we feed forward the training data with dropout off for one epoch without performing any backpropagation. While the data feeds forward, we only allow the Batch Normalization moving average estimate of the variance to update. In no way does this variance re-estimation at the Halfway point affect future training, because we do not train with these re-estimated variance parameters. Now, DenseNets hold the state of the art on CIFAR-10 and CIFAR-100 without data augmentation. Specifically, they obtain 5.77% error on CIFAR-10 without data augmentation and 23.42% on CIFAR-100 without data augmentation. Table 2 shows the results of Batch Normalization variance moving average re-estimation. The table shows L, k, and p, which are the number of layers L, the growth factor k, and the dropout keep probability p. As an example of a table row, SVHN Original shows the error achieved in the original DenseNet paper. The row below shows the DenseNet we trained at the Halfway point ("Halfway Error") and at the end of training ("Error"), and the error decreased under re-estimation of the Batch Normalization variance. The effect of updating the variance estimation is shown under columns with "BN Update."
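In a modern framework the re-estimation pass is only a few lines. The sketch below is our PyTorch rendering of the procedure (the paper predates this API, so treat it as an illustration rather than the authors' implementation): dropout is disabled, Batch Normalization layers are kept in training mode so that their running statistics update, and no gradients are computed.

```python
import torch
import torch.nn as nn

def reestimate_bn_stats(model, train_loader, device="cpu"):
    """One dropout-free pass over the training data, updating only BN running stats."""
    model.eval()  # turns off dropout and, by default, freezes BN running stats
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.train()  # re-enable the running mean/variance moving averages
    with torch.no_grad():  # no backpropagation during re-estimation
        for inputs, _ in train_loader:
            model(inputs.to(device))
    model.eval()
    return model
```

With dropout off the expected input to each layer is unchanged, so letting the running mean update alongside the variance is harmless; if one wants to update the variance only, the running means can be saved beforehand and restored afterward.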
We see that simply feeding forward the training data without dropout and allowing the Batch Normalization variance moving averages to update lets us surpass the state of the art on CIFAR-10 and CIFAR-100 and can sometimes improve accuracy by more than 2%.

Table 2: DenseNet results with Batch Normalization variance re-estimation. DenseNets without any Batch Normalization variance re-estimation are shown in the "Halfway Error" and "Error" columns. Rows with "Original" denote values from Huang et al. (2016). Bold values indicate that the previous state of the art is exceeded. For CIFAR-10 without data augmentation the previous state of the art was 5.77%, and for CIFAR-100 without data augmentation it was 23.42%.

| Dataset (Architecture) | Halfway Error | Halfway Error w/ BN Update | Error | Error w/ BN Update |
|---|---|---|---|---|
| SVHN (L = 40, k = 12, p = 0.8) Original | - | - | 1.79 | - |
| SVHN (L = 40, k = 12, p = 0.8) Ours | 5.18 | 4.19 | 1.92 | 1.85 |
| CIFAR-10 (L = 100, k = 12, p = 0.8) Original | - | - | 5.77 | - |
| CIFAR-10 (L = 100, k = 12, p = 0.8) Ours | 6.37 | 6.07 | **5.62** | **5.38** |
| CIFAR-100 (L = 100, k = 24, p = 0.8) Original | - | - | 23.42 | - |
| CIFAR-100 (L = 100, k = 12, p = 0.8) Original | - | - | 23.79 | - |
| CIFAR-100 (L = 100, k = 12, p = 0.7) Ours | 24.56 | **22.48** | 23.91 | **22.48** |
| CIFAR-100 (L = 100, k = 12, p = 0.8) Ours | 23.89 | **22.86** | **22.65** | **22.17** |

4 DISCUSSION

In practice, if we lack an estimate for a nonlinearity adjustment factor, then 0.5 is a reasonable default. A justification for a 0.5 adjustment factor comes from connections to previous weight initializations. This is because if p = 1 and we default the adjustments to 0.5, our initialization is the "Xavier" initialization if we use vectors from within the unit hypercube rather than vectors on the unit hypersphere (Glorot & Bengio, 2010). Knowing this connection, we can therefore generalize Xavier initialization to

$$W^l \sim \mathrm{Unif}[-1, 1] \times \sqrt{3}\,\sqrt{\frac{p}{0.5 + 0.5\,p^2}}.$$

Furthermore, we can optionally exclude the backpropagation variance term; in this case, if p = 1 and f is a ReLU, our initialization is He's initialization if we use random normal weights (He et al., 2015). Note that since He et al. (2015) considered a 0.5 corrective factor to account for the ReLU's compressiveness (its tendency to reduce output variance), it is plausible that E[f(z^{l-1})^2] is a general adjustment for a nonlinearity's compressiveness. Since most neural network nonlinearities are compressive, 0.5 is a reasonable default adjustment.¹ Also recall that our initialization with the backpropagation variance term amounts to

$$\frac{W^l}{\sqrt{\mathbb{E}\big[f(z^{l-1})^2\big]/p + p\,\mathbb{E}\big[f'(z^l)^2\big]}}.$$

If we use the 0.5 corrective factor default and we do not apply any dropout, then we are left with W^l, an orthonormal matrix or a matrix with its columns on a unit hypersphere.

¹Note that the first hidden layer is adjacent to neurons which can be viewed as having an identity activation. For these, a 1.0 factor is more appropriate, but the practical difference is minuscule.

5 CONCLUSION

A simple modification to previous weight initializations shows marked improvements on fully connected and convolutional architectures. Unlike recent variance stabilization techniques, ours only relies on simple corrective factors and not special forward passes or batch statistics. For highly regularized networks, the convergence gains are conspicuous and networks without the corrective factors are harder to train.
Therefore, if a user wants to train online or not pay the computational cost Batch Normalization imposes, he or she would do well to apply a dropout corrective factor to their weight initialization matrix.

If a user is able to use Batch Normalization, the effect of dropout still cannot be ignored. The Batch Normalization variance moving averages differ between train and test, so when training is complete, we re-estimate those variance parameters. This is accomplished by feeding the training data forward for one epoch without dropout on and only allowing the variance moving averages to change. By doing so, networks can improve their accuracy notably. Indeed, by applying this simple, highly general technique, we achieved the state of the art on CIFAR-10 and CIFAR-100 without data augmentation.

ACKNOWLEDGMENTS

We would like to thank Eric Martin for numerous suggestions, Steven Basart for training the SVHN DenseNet, and our anonymous reviewers for suggestions. We would also like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research.

REFERENCES

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint, 2016.

Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with Gaussian error linear units. arXiv preprint, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.

Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.

Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, Springer, 1998.

Dmytro Mishkin and Jiri Matas. All you need is a good init. In International Conference on Learning Representations, 2015.

Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Neural Information Processing Systems, 2016.

Sergey Zagoruyko. 92.45% on CIFAR-10 in Torch, 2015.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. British Machine Vision Conference, 2016.

A RANDOM BACKPROPAGATION

We can "feed backward" a random Gaussian matrix with standard deviation 0.01 and see how different initializations affect the distribution of error signals for each layer. Figure 5 shows the results when the backward correction factor is p E[f'(z^l)^2]. Again, we used the ReLU activation function due to its widespread use, so the He initialization performs considerably well.
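The appendix experiment is easy to replicate in miniature. The NumPy sketch below is our own construction (widths, depth, and keep rate are arbitrary, and the gradient path follows our reading of the Section 2.1 variance analysis): it pushes a small random error signal backward through 20 layers and compares the error-signal variance with and without dividing the weights by sqrt(p E[f'(z^l)^2]).

```python
import numpy as np

rng = np.random.default_rng(1)
p, deriv_moment, width, depth = 0.6, 0.5, 500, 20  # keep rate; E[f'(z)^2] for ReLU

def unit_columns(n_in, n_out):
    W = rng.standard_normal((n_in, n_out))
    return W / np.linalg.norm(W, axis=0, keepdims=True)

for corrected in (True, False):
    d = 0.01 * rng.standard_normal((256, width))  # random error signal
    for _ in range(depth):
        W = unit_columns(width, width)
        if corrected:
            W = W / np.sqrt(p * deriv_moment)     # backward correction
        relu_gate = rng.random(d.shape) < 0.5     # f'(z) for a ReLU, symmetric inputs
        keep = rng.random(d.shape) < p            # dropped units pass no gradient
        d = (d @ W.T) * relu_gate * keep
    print(f"corrected={corrected}: error-signal variance {d.var():.2e}")
```

Without the correction the per-layer variance multiplier is roughly p E[f'(z)^2] = 0.3, so the signal vanishes within a few layers; with it, the variance stays near its initial value.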
[Figure 5 histograms omitted: normalized distributions of backpropagated error signals at layers 5, 10, 15, and 20 under each initialization.]

Figure 5: A comparison of a unit hypersphere initialization with a backward correction, the Xavier initialization, and the He initialization. Values set to zero by dropout are removed from the normalized histograms. The first row has a dropout keep probability of 1.0, the second 0.6."}]
B1vRTeqxg | [{"section_index": "0", "section_name": "LEARNING CONTINUOUS SEMANTIC REPRESENTATIONS OF SYMBOLIC EXPRESSIONS", "section_text": "Miltiadis Allamanis, Pankajan Chanthirasegaran, Pushmeet Kohli, Charles Sutton
{m.allamanis, pankajan.chanthirasegaran, csutton}@ed.ac.uk, pkohli@microsoft.com

ABSTRACT

The question of how procedural knowledge is represented and inferred is a fundamental problem in machine learning and artificial intelligence. Recent work on program induction has proposed neural architectures, based on abstractions like stacks, Turing machines, and interpreters, that operate on abstract computational machines or on execution traces. But the recursive abstraction that is central to procedural knowledge is perhaps most naturally represented by symbolic representations that have syntactic structure, such as logical expressions and source code. Combining abstract, symbolic reasoning with continuous neural reasoning is a grand challenge of representation learning. As a step in this direction, we propose a new architecture, called neural equivalence networks, for the problem of learning continuous semantic representations of mathematical and logical expressions. These networks are trained to represent semantic equivalence, even of expressions that are syntactically very different. The challenge is that semantic representations must be computed in a syntax-directed manner, because semantics is compositional, but at the same time, small changes in syntax can lead to very large changes in semantics, which can be difficult for continuous neural architectures. We perform an exhaustive evaluation on the task of checking equivalence on a highly diverse class of symbolic algebraic and boolean expression types, showing that our model significantly outperforms existing architectures.

1 INTRODUCTION

Representing and learning knowledge about the world requires not only learning declarative knowledge about facts but also procedural knowledge, knowledge about how to do things, which can be complex yet difficult to articulate explicitly. The goal of building systems that learn procedural knowledge has motivated many recent architectures for learning representations of algorithms (Graves et al., 2014; Reed & de Freitas, 2016; Kaiser & Sutskever, 2016). These methods generally learn
from execution traces of programs (Reed & de Freitas, 2016) or input-output pairs generated from a program (Graves et al., 2014; Kurach et al., 2015; Riedel et al., 2016; Grefenstette et al., 2015; Neelakantan et al., 2015).

In this paper, we address the problem of learning continuous semantic representations (SemVecs) of symbolic expressions. The goal is to assign continuous vectors to symbolic expressions in such a way that semantically equivalent, but syntactically diverse, expressions are assigned to identical (or highly similar) continuous vectors, when given access to a training set of pairs for which semantic equivalence is known. This is an important but hard problem; learning composable SemVecs of symbolic expressions requires that we learn about the semantics of symbolic elements and operators and how they map to the continuous representation space, thus encapsulating implicit knowledge about symbolic semantics and its recursive abstractive nature.

Our work is similar in spirit to the work of Zaremba & Sutskever (2014), who focus on learning expression representations to aid the search for computationally efficient identities. They use recursive neural networks (TREENN)¹ (Socher et al., 2012) for modelling homogeneous, single-variable polynomial expressions. While they present impressive results, we find that the TREENN model fails when applied to more complex symbolic polynomial and boolean expressions. In particular, in our experiments we find that TREENNs tend to assign similar representations to syntactically similar expressions, even when they are semantically very different. The underlying conceptual problem is how to develop a continuous representation that follows syntax but not too much, that respects compositionality while also representing the fact that a small syntactic change can be a large semantic one.

To tackle this problem, we propose a new architecture, called neural equivalence networks (EQNETs). EQNETs learn how syntactic composition recursively composes SemVecs, like a TREENN, but are also designed to model large changes in semantics as the network progresses up the syntax tree. As equivalence is transitive, we formulate an objective function for training based on equivalence classes rather than pairwise decisions. The network architecture is based on composing residual-like multi-layer networks, which allows more flexibility in modeling the semantic mapping up the syntax tree. To encourage representations within an equivalence class to be tightly clustered, we also introduce a training method that we call subexpression forcing, which uses an autoencoder to force the representation of each subexpression to be predictable from its syntactic neighbors. Experimental evaluation on a highly diverse class of symbolic algebraic and boolean expression types shows that EQNETs dramatically outperform existing architectures like TREENNs and RNNs.

To summarize, the main contributions of our work are: (a) We formulate the problem of learning continuous semantic representations (SemVecs) of symbolic expressions and develop benchmarks for this task. (b) We present neural equivalence networks (EQNETs), a neural network architecture that learns to map expression semantics onto a continuous semantic representation space and to perform symbolic operations in this space. (c) We provide an extensive evaluation on boolean and polynomial expressions, showing that EQNETs perform dramatically better than state-of-the-art alternatives. Code and data are available at
groups.inf.ed.ac.uk/cup/semvec.

2 MODEL

In this work, we are interested in learning semantic, composable representations of mathematical expressions (SemVecs) and in learning to generate identical representations for expressions that are semantically equivalent, i.e. that belong to the same equivalence class. Equivalence is a stronger property than similarity, which is habitually learned by neural networks, since equivalence is additionally a transitive relationship.

Problem Hardness. Finding the equivalence of arbitrary symbolic expressions is an NP-hard problem or worse. For example, if we focus on boolean expressions, reducing an expression to the representation of the false equivalence class amounts to proving its non-satisfiability — an NP-complete problem. Of course, we do not expect to circumvent an NP-complete problem with neural networks. A network for solving boolean equivalence would require an exponential number of nodes in the size of the formula if P ≠ NP. Instead, our goal is to develop architectures whose inductive biases allow them to efficiently learn to solve the equivalence problems for expressions that are similar to a smaller number of expressions in a given training set. This requires that the network learn identical representations for expressions that may be syntactically different but semantically equivalent, and also discriminate between expressions that may be syntactically very similar but are non-equivalent. Appendix A shows a sample of such expressions that illustrates the hardness of this problem.

¹To avoid confusion, we use TREENN for recursive neural networks and retain RNN for recurrent neural networks.

Notation and Framework. We employ the general framework of recursive neural networks (TREENN) (Socher et al., 2012; 2013) to learn to compose subtree representations into a single representation. The TREENNs we consider operate on tree structures of the syntactic parse of a formula. Given a tree T, TREENNs learn distributed representations by recursively computing the representations of its subtrees. We denote the children of a node n as ch(n), which is a (possibly empty) ordered tuple of nodes. We also use par(n) to refer to the parent node of n. Each node in our tree has a type, e.g. a terminal node could be of type "a" referring to the variable a, or of type "and" referring to a node of the logical AND (∧) operation. We refer to the type of a node n as τ_n. At a high level, TREENNs retrieve the representation of a tree T rooted at node p by invoking TREENET(p), which returns a vector representation r_p ∈ R^D, i.e., a SemVec, using the recursion

$$\textsc{TreeNet}(n) = \begin{cases} \textsc{LookupLeafEmbedding}(\tau_n) & \text{if } \mathrm{ch}(n) = \emptyset \\ \textsc{Combine}\big(\textsc{TreeNet}(c_0), \dots, \textsc{TreeNet}(c_k), \tau_n\big) & \text{where } (c_0, \dots, c_k) = \mathrm{ch}(n). \end{cases}$$

The general framework of TREENET allows two points of variation: the implementation of LOOKUPLEAFEMBEDDING and COMBINE. The traditional TREENNs (Socher et al., 2013) define LOOKUPLEAFEMBEDDING as a simple lookup operation within a matrix of embeddings and COMBINE as a single-layer neural network. As discussed next, these will both prove to be serious limitations in our setting.

2.1 NEURAL EQUIVALENCE NETWORKS

We now define the neural equivalence networks (EQNETs) that learn to compose representations of equivalence classes into new equivalence classes (Figure 1a). Our network follows the TREENN architecture; that is, our EQNETs are implemented using TREENET, so as to model the compositional nature of symbolic expressions.
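To make the recursion concrete, here is a minimal Python sketch of TREENET with pluggable LOOKUPLEAFEMBEDDING and COMBINE; the data structures, names, and the toy combiner are ours, and learned parameters are elided.

```python
from dataclasses import dataclass, field
from typing import Callable, List
import numpy as np

@dataclass
class Node:
    node_type: str                                        # tau_n, e.g. "a", "and", "or"
    children: List["Node"] = field(default_factory=list)  # ch(n)

def tree_net(node: Node,
             lookup_leaf_embedding: Callable[[str], np.ndarray],
             combine: Callable[[List[np.ndarray], str], np.ndarray]) -> np.ndarray:
    """Recursively compute the SemVec of the tree rooted at `node`."""
    if not node.children:  # leaf: embedding lookup by node type
        return lookup_leaf_embedding(node.node_type)
    child_reprs = [tree_net(c, lookup_leaf_embedding, combine) for c in node.children]
    return combine(child_reprs, node.node_type)

# Toy instantiation on (a or c) and a, the running example of Figure 1a.
rng = np.random.default_rng(0)
emb = {t: rng.standard_normal(4) for t in ("a", "c", "and", "or")}

def lookup(t):
    return emb[t] / np.linalg.norm(emb[t])

def toy_combine(child_reprs, t):
    out = np.mean(child_reprs, axis=0) + emb[t]
    return out / np.linalg.norm(out)

expr = Node("and", [Node("or", [Node("a"), Node("c")]), Node("a")])
print(tree_net(expr, lookup, toy_combine))
```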
However, the traditional TREENNs (Socher et al., 2013) use a single-layer neural network at each tree node. During our preliminary investigations and in Section 3, we found that single-layer networks are not expressive enough to capture many operations, even a simple XOR boolean operator, because representing these operations requires high-curvature operations in the continuous semantic representation space. Instead, we turn to multi-layer neural networks. In particular, we define COMBINE as in Figure 1b. This uses a two-layer MLP with a residual-like connection to compute the SemVec of each parent node in the syntax tree given those of its children. Each node type τ_n, e.g., each logical operator, has a different set of weights. We experimented with deeper networks but this did not yield any improvements. However, as TREENNs become deeper, they suffer from optimization issues, such as diminishing and exploding gradients. This is essentially because of the highly compositional nature of tree structures, where the same network (i.e. the COMBINE non-linear function) is used recursively, causing it to "echo" its own errors and producing unstable feedback loops. We observe this problem even with only two-layer MLPs, as the overall network can become quite deep when using two layers for each node in the syntax tree.

[Figure 1a diagram omitted.]
(a) Architectural diagram of EQNETs. The example parse tree shown is of the boolean expression (a ∨ c) ∧ a.
(b) COMBINE(r_{c_0}, …, r_{c_k}, τ_n): l_0 ← [r_{c_0}, …, r_{c_k}]; l_1 ← σ(W_{i,τ_n} · l_0); l_out ← W_{o_0,τ_n} · l_0 + W_{o_1,τ_n} · l_1; return l_out / ∥l_out∥_2.
(c) SUBEXPFORCE(c, n): x ← [r_{c_0}, …, r_{c_k}]; x̃ ← tanh(W_{d,τ_n} · tanh(W_{e,τ_n} · ([x, r_n] ⊙ ν))); x̂ ← x̃ · ∥x∥_2 / ∥x̃∥_2; r̂_n ← COMBINE(x̂, τ_n); return −(x̂ᵀx + r̂_nᵀ r_n)/2.

We resolve these issues in a few different ways. First, we constrain each SemVec to have unit norm. That is, we set LOOKUPLEAFEMBEDDING(τ_n) = c_{τ_n} / ∥c_{τ_n}∥_2, and we normalize the output of the final layer of COMBINE in Figure 1b. The normalization step of l_out and c_{τ_n} is somewhat similar to layer normalization (Ba et al., 2016), although applying layer normalization directly did not work for our problem. Normalizing the SemVecs partially resolves issues with diminishing and exploding gradients, and removes a spurious degree of freedom in the semantic representation. As simple as this modification may seem, we found that it was vital to obtaining effective performance, and all of our multi-layer TREENNs converged to low-performing parameters without it.

However, this modification is not sufficient, since the network may learn to map expressions from the same equivalence class to multiple SemVecs in the continuous space. We alleviate this problem using a method that we call subexpression forcing, which guides EQNET to cluster its output to one location per equivalence class. We encode each parent-children tuple [r_{c_0}, …, r_{c_k}, r_n], containing the (computed) representations of the children and parent node, into a low-dimensional space using a denoising autoencoder. We then seek to minimize the reconstruction error of the child representations (r_{c_0}, …, r_{c_k}) as well as of the reconstructed parent representation r̂_n that can be computed from the reconstructed children. Thus, more formally, we minimize the return value of SUBEXPFORCE in Figure 1c, where ν is a binary noise vector with k percent of its elements set to zero. Note that the encoder is specific to the type τ_n. Although our SUBEXPFORCE may seem similar to the recursive autoencoders of Socher et al. (2011), it differs significantly in form and purpose, since it acts as an autoencoder on the whole parent-children representation tuple and the encoding is not used within the computation of the parent representation.
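As a concrete reference, the COMBINE of Figure 1b can be written in a few lines of NumPy; this is our sketch of that computation (the initialization scheme and dimensions are arbitrary choices, and one parameter set would be kept per node type).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Combine:
    """Two-layer MLP with a residual-like connection and unit-norm output (Figure 1b)."""
    def __init__(self, n_children, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        k = n_children * dim
        self.W_i = rng.standard_normal((dim, k)) / np.sqrt(k)    # hidden layer
        self.W_o0 = rng.standard_normal((dim, k)) / np.sqrt(k)   # skip path on l_0
        self.W_o1 = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def __call__(self, child_reprs):
        l0 = np.concatenate(child_reprs)         # [r_c0, ..., r_ck]
        l1 = sigmoid(self.W_i @ l0)
        l_out = self.W_o0 @ l0 + self.W_o1 @ l1  # residual-like combination
        return l_out / np.linalg.norm(l_out)     # unit-norm SemVec

combine_and = Combine(n_children=2, dim=64)      # separate weights per node type
r = combine_and([np.ones(64) / 8.0, np.ones(64) / 8.0])
print(np.linalg.norm(r))  # 1.0
```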
In addition, this constraint has two effects. First, it forces each parent-children tuple to "live" in a low-dimensional space, providing a clustering-like behavior. Second, it implicitly joins distinct locations that belong to the same equivalence class. To illustrate the latter point, imagine two semantically equivalent child nodes c′_0 and c″_0 of different parent nodes that have two geometrically distinct representations, i.e. ∥r_{c′_0} − r_{c″_0}∥ > ε, and COMBINE(r_{c′_0}, …) ≈ COMBINE(r_{c″_0}, …). In some cases, due to the autoencoder noise, the differences between the input tuples x′ and x″ that contain r_{c′_0} and r_{c″_0} will be non-existent, and the decoder will be forced to predict a single location r̂_{c_0} (possibly different from r_{c′_0} and r_{c″_0}). Then, when minimizing the reconstruction error, both r_{c′_0} and r_{c″_0} will be attracted to r̂_{c_0} and eventually should merge.

2.2 TRAINING

We train EQNETs from a dataset of expressions whose semantic equivalence is known. Given a training set T = {T_1, …, T_N} of parse trees of expressions, we assume that the training set is partitioned into equivalence classes E = {e_1, …, e_J}. We use a supervised objective similar to classification; the difference between classification and our setting is that whereas standard classification problems consider a fixed set of class labels, in our setting the number of equivalence classes in the training set will vary with N. Given an expression tree T that belongs to the equivalence class e_i ∈ E, we compute the probability

$$P(e_i \mid T) = \frac{\exp\big(\textsc{TreeNet}(T)^\top q_{e_i} + b_i\big)}{\sum_j \exp\big(\textsc{TreeNet}(T)^\top q_{e_j} + b_j\big)}$$

where the q_{e_i} are model parameters that we can interpret as representations of each equivalence class that appears in the training set, and the b_i are bias terms. Note that in this work, we only use information about the equivalence class of the whole expression T, ignoring available information about subexpressions. This is without loss of generality, because if we do know the equivalence class of a subexpression of T, we can simply add that subexpression to the training set. Directly maximizing P(e_i | T) would be bad for EQNET since its unit-normalized outputs cannot achieve high probabilities within the softmax. Instead, we train with a max-margin objective

$$L_{\mathrm{ACC}}(T, e_i) = \max\Big(0,\; m + \max_{e_j \neq e_i,\, e_j \in \mathcal{E}} \log P(e_j \mid T) - \log P(e_i \mid T)\Big)$$

where m > 0 is a scalar margin. Therefore the optimized loss function for a single expression tree T that belongs to equivalence class e_i ∈ E is

$$\mathcal{L}(T, e_i) = L_{\mathrm{ACC}}(T, e_i) + \frac{\beta}{|Q|} \sum_{n \in Q} \textsc{SubexpForce}\big(\mathrm{ch}(n), n\big)$$

where Q = {n ∈ T : |ch(n)| > 0}, i.e. Q contains the non-leaf nodes of T, and β ∈ (0, 1] is a scalar weight. We found that subexpression forcing is counterproductive early in training, before the SemVecs begin to represent aspects of semantics. So, for each epoch t, we set β = 1 − 10^{−νt} with ν > 0.

Instead of the supervised objective that we propose, an alternative option for training EQNET would be a Siamese objective (Chopra et al., 2005) that learns about similarities (rather than equivalence) between expressions. In practice, we found the optimization to be very unstable, yielding suboptimal performance. We believe that this has to do with the compositional and recursive nature of the task, which creates unstable dynamics, and with the fact that equivalence is a stronger property than similarity.
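The training criterion is easy to state in code. Below is our NumPy sketch of L_ACC and the combined loss (all names are ours; the subexpression-forcing terms are assumed to have been computed elsewhere, one per non-leaf node).

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def acc_loss(semvec, class_reprs, biases, true_idx, m=0.5):
    """L_ACC: hinge on the log-probability gap to the closest competing class."""
    logp = log_softmax(class_reprs @ semvec + biases)
    rival = np.max(np.delete(logp, true_idx))
    return max(0.0, m + rival - logp[true_idx])

def eqnet_loss(semvec, class_reprs, biases, true_idx, subexp_terms, beta):
    """L_ACC plus the subexpression-forcing penalty averaged over non-leaf nodes."""
    forcing = beta * float(np.mean(subexp_terms)) if len(subexp_terms) else 0.0
    return acc_loss(semvec, class_reprs, biases, true_idx) + forcing

# Toy usage: 3 equivalence classes, 8-dimensional SemVecs, two internal nodes.
rng = np.random.default_rng(0)
q, b = rng.standard_normal((3, 8)), np.zeros(3)
v = rng.standard_normal(8); v /= np.linalg.norm(v)
beta = 1.0 - 10.0 ** (-0.1 * 5)  # the schedule beta = 1 - 10^(-nu*t) at epoch t = 5
print(eqnet_loss(v, q, b, true_idx=1, subexp_terms=[-0.9, -0.8], beta=beta))
```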
Baselines. To compare the performance of our model, we train the following baselines. TF-IDF learns a representation given the tokens of each expression (variables, operators and parentheses). This can capture topical/declarative knowledge but is unable to capture procedural knowledge. GRU refers to the token-level gated recurrent unit encoder of Bahdanau et al. (2015) that encodes the token sequence of an expression into a distributed representation. Stack-augmented RNN refers to the work of Joulin & Mikolov (2015), which was used to learn algorithmic patterns and uses a stack as a memory and operates on the expression tokens. We also include two recursive neural network (TREENN) architectures: the 1-layer TREENN, which is the original TREENN also used by Zaremba & Sutskever (2014), and a 2-layer TREENN, where COMBINE is a classic two-layer MLP without residual connections. The latter shows the effect of SemVec normalization and subexpression forcing.

Datasets. We generate datasets of expressions grouped into equivalence classes from two domains. The datasets from the BOOL domain contain boolean expressions and the POLY datasets contain polynomial expressions. In both domains, an expression is either a variable, a binary operator that combines two expressions, or a unary operator applied to a single expression. When defining equivalence, we interpret distinct variables as referring to different entities in the domain, so that, e.g., the polynomials c · (a · a + b) and f · (d · d + e) are not equivalent. For each domain, we generate "simple" datasets which use a smaller set of possible operators and "standard" datasets which use a larger set of more complex operators. We generate each dataset by exhaustively generating all parse trees up to a maximum tree size. All expressions are then simplified into a canonical form in order to determine their equivalence class and are grouped accordingly. Table 1 shows the datasets we generated. We also present in Appendix A some sample expressions. For the polynomial domain, we also generated ONEV-POLY datasets, which are polynomials over a single variable, since they are similar to the setting considered by Zaremba & Sutskever (2014) — although ONEV-POLY is still a little more general because it is not restricted to homogeneous polynomials. Learning SemVecs for boolean expressions is already a hard problem; with n boolean variables, there are 2^{2^n} equivalence classes (i.e. one for each possible truth table). We split the datasets into training, validation and test sets. We create two test sets, one to measure generalization performance on equivalence classes that were seen in the training data (SEENEQCLASS), and one to measure generalization to unseen equivalence classes (UNSEENEQCLASS). It is easiest to describe UNSEENEQCLASS first. To create UNSEENEQCLASS, we randomly select 20% of all the equivalence classes, and place all of their expressions in the test set. We select equivalence classes only if they contain at least two expressions but less than three times the average number of expressions per equivalence class. We thus avoid selecting very common (and hence trivial to learn) equivalence classes for the test set. Then, to create SEENEQCLASS, we take the remaining 80% of the equivalence classes, and randomly split the expressions in each class into training, validation, and SEENEQCLASS test in the proportions 60%-15%-25%. We provide the datasets online.
Table 1: Dataset statistics and results. SIMP datasets contain simple operators (∧, ∨, ¬ for BOOL and +, − for POLY) while the rest contain all operators (i.e. ∧, ∨, ¬, ⊕, ⇒ for BOOL and +, −, · for POLY), where ⊕ is the XOR operator. The number in the dataset name is the maximum tree size of the parsed expressions within that dataset. L refers to a "larger" number of 10 variables. H refers to the entropy of the equivalence classes. The last six columns report score5 (%) on UNSEENEQCLASS.

| Dataset | # Vars | # Equiv. Classes | # Exprs | H | tf-idf | GRU | Stack RNN | 1-L TREENN | 2-L TREENN | EQNET |
|---|---|---|---|---|---|---|---|---|---|---|
| SIMPBOOL8 | 3 | 120 | 39,048 | 5.6 | 17.4 | 30.9 | 26.7 | 27.4 | 25.5 | 97.4 |
| SIMPBOOL10S | 3 | 191 | 26,304 | 7.2 | 6.2 | 11.0 | 7.6 | 25.0 | 93.4 | 99.1 |
| BOOL5 | 3 | 95 | 1,239 | 5.6 | 34.9 | 35.8 | 12.4 | 16.4 | 26.0 | 65.8 |
| BOOL8 | 3 | 232 | 257,784 | 6.2 | 10.7 | 17.2 | 16.0 | 15.7 | 15.4 | 58.1 |
| BOOL10S | 10 | 256 | 51,299 | 8.0 | 5.0 | 5.1 | 3.9 | 10.8 | 20.2 | 71.4 |
| SIMPBOOLL5 | 10 | 1,342 | 10,050 | 9.9 | 53.1 | 40.2 | 50.5 | 3.48 | 19.9 | 85.0 |
| BOOLL5 | 10 | 7,312 | 36,050 | 11.8 | 31.1 | 20.7 | 11.5 | 0.1 | 0.5 | 75.2 |
| SIMPPOLY5 | 3 | 47 | 237 | 5.0 | 21.9 | 6.3 | 1.0 | 40.6 | 27.1 | 65.6 |
| SIMPPOLY8 | 3 | 104 | 3,477 | 5.8 | 36.1 | 14.6 | 5.8 | 12.5 | 13.1 | 98.9 |
| SIMPPOLY10 | 3 | 195 | 57,909 | 6.3 | 25.9 | 11.0 | 6.6 | 19.9 | 7.1 | 99.3 |
| ONEV-POLY10 | 1 | 83 | 1,291 | 5.4 | 43.5 | 10.9 | 5.3 | 10.9 | 8.5 | 81.3 |
| ONEV-POLY13 | 1 | 677 | 107,725 | 7.1 | 3.2 | 4.7 | 2.2 | 10.0 | 56.2 | 90.4 |
| POLY5 | 3 | 150 | 516 | 6.7 | 37.8 | 34.1 | 2.2 | 46.8 | 59.1 | 55.3 |
| POLY8 | 3 | 1,102 | 11,451 | 9.0 | 13.9 | 5.7 | 2.4 | 10.4 | 14.8 | 86.2 |

S: Datasets are sampled uniformly from all possible expressions, and include all equivalence classes, but sample 200 expressions per equivalence class if more expressions can be formed.

Hyperparameters. We tune the hyperparameters of the baselines and EQNET using Bayesian optimization (Snoek et al., 2012), optimizing on a boolean dataset with 5 variables and a maximum tree size of 7 (not shown in Table 1). We use the average k-NN (k = 1, …, 15) statistics (described next) as an optimization metric. The selected hyperparameters are detailed in Appendix C.

Metric. To evaluate the quality of the learned representations we count the proportion of the k nearest neighbors of each expression (using cosine similarity) that belong to the same equivalence class. More formally, given a test query expression q in an equivalence class c, we find the k nearest neighbors N_k(q) of q across all expressions, and define the score as

$$\text{score}_k(q) = \frac{|N_k(q) \cap c|}{\min(k, |c|)}.$$

To report results for a given test set, we simply average score_k(q) over all expressions q in the test set.
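A direct implementation of this metric is shown below; this is our own sketch, which assumes the query vectors are rows of the full matrix (so the nearest neighbor at rank 0, the query itself, is skipped and excluded from |c|) — one reading of the definition above.

```python
import numpy as np

def score_k(vecs, classes, query_idx, k=5):
    """Average score_k over the queries: fraction of the k cosine nearest
    neighbors that fall in the query's equivalence class."""
    X = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    classes = np.asarray(classes)
    scores = []
    for i in query_idx:
        sims = X @ X[i]
        nn = np.argsort(-sims)[1:k + 1]  # drop rank 0, the query itself
        same = classes[nn] == classes[i]
        class_size = int((classes == classes[i]).sum()) - 1  # exclude the query
        scores.append(same.sum() / min(k, max(class_size, 1)))
    return float(np.mean(scores))

# Toy check: two tight clusters of three equivalent expressions each.
rng = np.random.default_rng(0)
base = np.vstack([rng.standard_normal(8)] * 3 + [rng.standard_normal(8)] * 3)
vecs = base + 0.01 * rng.standard_normal(base.shape)
print(score_k(vecs, [0, 0, 0, 1, 1, 1], query_idx=range(6), k=2))  # ~1.0
```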
Evaluation. Figure 2 presents the average score_k across the datasets for each model. Table 1 shows score5 on UNSEENEQCLASS for each dataset. Detailed plots can be found in Appendix B. It can be clearly seen that EQNET performs better for all datasets, by a large margin. The only exception is POLY5, where the two-layer TREENN performs better; however, this may have to do with the small size of the dataset. The reader may observe that the simple datasets (containing fewer operations and variables) are easier to learn. Understandably, introducing more variables increases the size of the represented space, reducing performance. The tf-idf method performs better in settings where more variables are included, because it captures well the variables and operations used. Similar observations can be made for sequence models. The one- and two-layer TREENNs have mixed performance; we believe that this has to do with exploding and diminishing gradients due to the deep and highly compositional nature of TREENNs. Although Zaremba & Sutskever (2014) consider a different problem to us, they use data similar to the ONEV-POLY datasets with a traditional TREENN architecture. Our evaluation suggests that EQNETs perform much better within the ONEV-POLY setting.

Figure 2: Average score_k (y-axis in log scale): (a-i) SEENEQCLASS and (a-ii) UNSEENEQCLASS for models trained and tested on the same type of data; (b-i) SEENEQCLASS and (b-ii) UNSEENEQCLASS for the evaluation of compositionality, where the training set is simpler than the test set. Markers are shown every three ticks for clarity. TREENN refers to Socher et al. (2012). Detailed, per-dataset plots can be found in Appendix B.

Figure 3: Visualization of score5 for all expression nodes for three BOOL10 and four POLY8 test sample expressions using EQNET. The darker the color, the lower the score, i.e. white implies a score of 1 and dark red a score of 0.

Evaluation of Compositionality. We evaluate whether the EQNETs have successfully learned to compute compositional representations, rather than overfitting to expression trees of a small size. We evaluate this by considering a type of transfer setting, in which we train on simpler datasets but test on more complex ones; for example, training on the training set of BOOL5 but testing on the test set of BOOL8. We average over 11 different train-test pairs (full list in Figure 6) and present the results in Figure 2b-i and Figure 2b-ii (note the differences in scale to the two figures on the left). These graphs again show that EQNETs are dramatically better than any of the other methods, and indeed, performance is only a bit worse than in the non-transfer setting.

Impact of EQNET Components. EQNETs differ from traditional TREENNs in two major components, which we analyze here. First, SUBEXPFORCE has a positive impact on performance. When training the network with and without subexpression forcing, on average, the area under the curve (AUC) of score_k decreases by 16.8% on SEENEQCLASS and 19.7% on UNSEENEQCLASS. This difference is smaller in the transfer setting of Figure 2b-i and Figure 2b-ii, where the AUC decreases by 8.8% on average. However, even in this setting we observe that SUBEXPFORCE helps more in large and diverse datasets. The second key difference to traditional TREENNs is the output normalization at each layer. Comparing our model to the one-layer and two-layer TREENNs, we again find that output normalization results in important improvements (the two-layer TREENNs have on average 60.9% smaller AUC).

3.2 QUALITATIVE EVALUATION

Table 2 and Table 3 show expressions whose SemVec nearest neighbor is of an expression of another equivalence class. Manually inspecting boolean expressions, we find that EQNET confusions happen more when a XOR or implication operator is involved. In fact, we fail to find any confused expressions for EQNET not involving these operations in BOOL5 and in the top 100 expressions in BOOL10. As expected, tf-idf confuses expressions with others that contain the same operators and variables, ignoring order. In contrast, GRU and TREENN tend to confuse expressions with very similar symbolic representations, differing in one or two deeply nested variables or operators. EQNET tends to confuse fewer expressions (as we previously showed), and the confused expressions tend to be more syntactically diverse and semantically related.

Table 2: Non-semantically-equivalent first nearest neighbors from BOOL8. A checkmark indicates that the method correctly results in the nearest neighbor being from the same equivalence class.
Expression | a∧(a∧(a∧(¬c)))  | a∧(a∧(c⇒(¬c)))  | (a∧a)∧(c⇒(¬c))
tf-idf     | c∧((a∧a)∧(¬a))  | c⇒(¬((c∧a)∧a))  | c⇒(¬((c∧a)∧a))
GRU        | a∧(a∧(c∧(¬c)))  | (a∧a)∧(c⇒(¬c))  | ✓
1L-TREENN  | a∧(a∧(a∧(¬b)))  | a∧(a∧(c⇒(¬b)))  | (a∧a)∧(c⇒(¬b))
EQNET      | (¬(b⇒(b∨c)))∧a  | ✓               | ✓

Table 3: Non-semantically-equivalent first nearest neighbors from POLY8. A checkmark indicates that the method correctly results in the nearest neighbor being from the same equivalence class.

Expression | a+(c·(a+c))  | ((a+c)·c)+a  | (b·b)−b
tf-idf     | ((a+c)+a)·c  | (c·a)+(a+c)  | b·(b−b)
GRU        | b+(c·(a+c))  | ((b+c)·c)+a  | ((b+b)·b)−b
1L-TREENN  | a+(c·(b+c))  | ((b+c)·c)+a  | ((a−c)·b)−b
EQNET      | ✓            | ✓            | ((b·b)·b)−b

Figure 3 shows a visualization of score5 for each node in the expression tree. One may see that EQNET knows how to compose expressions that achieve a good score, even if the subexpressions achieve a worse score. This suggests that for common expressions (e.g. single variables and monomials) the network tends to select a unique location, without merging the equivalence classes or affecting the upstream performance of the network. Larger scale interactive t-SNE visualizations can be found online.

Figure 4 presents two PCA visualizations of the learned embeddings of simple expressions and their negations/negatives. It can be easily discerned that the black dots and their negations (in red) are easily discriminated in the semantic representation space. Figure 4b shows this property in a very clear manner: left-right discriminates between polynomials with a and −a, top-bottom between polynomials that contain b and −b, and the diagonal y = x between c and −c. We observe a similar behavior in Figure 4a for boolean expressions.

Figure 4: A PCA visualization of some simple non-equivalent boolean and polynomial expressions (black squares) and their negations (red circles). The lines connect the negated expressions.

4 RELATED WORK

Researchers have proposed compilation schemes that can transform any given program or expression to an equivalent neural network (Gruau et al., 1995; Neto et al., 2003; Siegelmann, 1994). One can consider a serialized version of the resulting neural network as a representation of the expression. However, it is not clear how we could compare the serialized representations corresponding to two expressions and whether this mapping preserves semantic distances.

Recursive neural networks (TREENN) (Socher et al., 2012; 2013) have been successfully used in NLP with multiple applications. Socher et al. (2012) show that TREENNs can learn to compute the values of some simple propositional statements. EQNET's SUBEXPFORCE may resemble recursive autoencoders (Socher et al., 2011) but differs in form and function, encoding the whole parent-children tuple to force a clustering behavior. In addition, when encoding each expression our architecture does not use a pooling layer but directly produces a single representation for the expression.

Mou et al. (2016) use tree convolutional neural networks to classify code into 106 student submission tasks. Although their model learns intermediate representations of the student tasks, it is a way of learning task-specific features in the code, rather than of learning semantic representations of programs. Piech et al. (2015) also learn distributed matrix representations of programs from student submissions. However, to learn the representations, they use input and output program states and do not test over program equivalence. Additionally, these representations do not necessarily represent program equivalence, since they do not learn the representations over the exhaustive set of all possible input-output states. Allamanis et al. (2016) learn variable-sized representations of source code snippets to summarize them with a short function-like name. This method aims to learn summarization features in code rather than to learn representations of symbolic expression equivalence.
More closely related is the work of Zaremba & Sutskever (2014), who use a recursive neural network (TREENN) to guide the tree search for more efficient mathematical identities, limited to homogeneous single-variable polynomial expressions. In contrast, EQNETs consider a much wider set of expressions, employ subexpression forcing to guide the learned SemVecs to better represent equivalence, and do not use search when looking for equivalent expressions. Alemi et al. (2016) use RNNs and convolutional neural networks to detect features within mathematical expressions and speed the search for premise selection during automated theorem proving, but do not explicitly account for semantic equivalence. In the future, SemVecs may find useful applications within this work.

Our work is also related to recent work on neural network architectures that learn controllers/programs (Gruau et al., 1995; Graves et al., 2014; Joulin & Mikolov, 2015; Grefenstette et al., 2015; Dyer et al., 2015; Reed & de Freitas, 2015; Neelakantan et al., 2015; Kaiser & Sutskever, 2016). In contrast to this work, we do not aim to learn how to evaluate expressions or execute programs with neural network architectures but to learn continuous semantic representations (SemVecs) of expression semantics, irrespectively of how they are syntactically expressed or evaluated.

In this work, we presented EQNETs, a first step in learning continuous semantic representations (SemVecs) of procedural knowledge. SemVecs have the potential of bridging continuous representations with symbolic representations, useful in multiple applications in artificial intelligence, machine learning and programming languages.

We show that EQNETs perform significantly better than state-of-the-art alternatives. But further improvements are needed, especially for more robust training of compositional models. In addition, even for relatively small symbolic expressions, we have an exponential explosion of the semantic space to be represented. Fixed-sized SemVecs, like the ones used in EQNET, eventually limit the capacity that is available to represent procedural knowledge. In the future, to represent more complex procedures, variable-sized representations would seem to be required.

ACKNOWLEDGMENTS

This work was supported by Microsoft Research through its PhD Scholarship Programme and the Engineering and Physical Sciences Research Council [grant number EP/K024043/1]. We thank the University of Edinburgh Data Science EPSRC Centre for Doctoral Training for providing additional computational resources.

REFERENCES
Alex A. Alemi, François Chollet, Geoffrey Irving, Christian Szegedy, and Josef Urban. DeepMath - deep sequence models for premise selection. arXiv preprint arXiv:1606.04442, 2016.

Lukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In ICLR, 2016.

Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.

João Pedro Neto, Hava T. Siegelmann, and J. Félix Costa. Symbolic processing in neural networks. Journal of the Brazilian Computer Society, 2003.

Scott Reed and Nando de Freitas. Neural programmer-interpreters. In ICLR, 2016.

Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In EMNLP, 2011.

Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.

SYNTHETIC EXPRESSION DATASETS

Below are sample expressions within an equivalence class for the two types of datasets we consider. (Sample-expression tables: each column lists syntactically distinct expressions belonging to a single equivalence class, e.g. the classes of (¬a)∧(¬b), False, and (¬a)∨b for the boolean datasets, and the classes of b·b, c·c, and −a−c for POLY8.)

B DETAILED EVALUATION

Figure 5 presents a detailed evaluation of our k-NN metric for each dataset. Figure 6 shows the detailed evaluation when using models trained on simpler datasets but tested on more complex ones, essentially evaluating the learned compositionality of the models. Figure 9 shows how the performance varies across the datasets based on their characteristics. As expected, as the number of variables increases, the performance worsens (Figure 9a), and expressions with more complex operators tend to have worse performance (Figure 9b). In contrast, Figure 9c suggests no obvious correlation between performance and the entropy of the equivalence classes within the datasets. The results for UNSEENEQCLASS look very similar and are not plotted here.

The optimized hyperparameters are detailed in Table 4. All hyperparameters were optimized using the Spearmint (Snoek et al., 2012) Bayesian optimization package. The same range of values was used for all common model hyperparameters.

Figure 5: Evaluation of score_k (y axis) for k = 1, ..., 15 on (a) the SEENEQCLASS and (b) the UNSEENEQCLASS test sets, where each model is evaluated on the dataset it has been trained on. The markers are shown every five ticks of the x-axis to make the graph clearer. TREENN refers to the model of Socher et al. (2012).
Figure 6: Evaluation of compositionality. Evaluation of score_k (y axis) for k = 1, ..., 15, using models trained on simpler datasets: (a) SEENEQCLASS evaluation and (b) UNSEENEQCLASS evaluation, where each panel is labeled "model trained on" -> "test dataset". The markers are shown every five ticks of the x-axis for clarity. TREENN refers to the model of Socher et al. (2012).

Figure 7: Precision-recall curves averaged across datasets: (a) SEENEQCLASS; (b) UNSEENEQCLASS.

Figure 8: Receiver operating characteristic (ROC) curves averaged across datasets: (a) SEENEQCLASS; (b) UNSEENEQCLASS.

Table 4: Hyperparameters used in this work.

EQNET: learning rate 10^-2.1, RMSProp ρ = 0.88, momentum 0.88, minibatch size 900, representation size D = 64, autoencoder size M = 8, autoencoder noise κ = 0.61, gradient clipping 1.82, initial parameter standard deviation 10^-2.05, dropout rate 0.11, hidden layer size 8, ν = 4, curriculum initial tree size 6.96, curriculum step per epoch 2.72, objective margin m = 0.5.

1-layer-TREENN: learning rate 10^-3.5, RMSProp ρ = 0.6, momentum 0.01, minibatch size 650, representation size D = 64, gradient clipping 3.6, initial parameter standard deviation 10^-1.28, dropout 0.0, curriculum initial tree size 2.8, curriculum step per epoch 2.4, objective margin m = 2.41.

2-layer-TREENN: learning rate 10^-3.5, RMSProp ρ = 0.9, momentum 0.95, minibatch size 1000, representation size D = 64, gradient clipping 5, initial parameter standard deviation 10^-4, dropout 0.0, hidden layer size 16, curriculum initial tree size 6.5, curriculum step per epoch 2.25, objective margin m = 0.62.

GRU: learning rate 10^-2.31, RMSProp ρ = 0.90, momentum 0.66, minibatch size 100, representation size D = 64, gradient clipping 0.87, token embedding size 128, initial parameter standard deviation 10^-1, dropout rate 0.26.

StackRNN: learning rate 10^-2.9, RMSProp ρ = 0.99, momentum 0.85, minibatch size 500, representation size D = 64, gradient clipping 0.70, token embedding size 64, RNN parameter weights initialization standard deviation 10^-4, embedding weight initialization standard deviation 10^-3, dropout 0.0, stack count 40.
Figure 9: EQNET performance on SEENEQCLASS for various dataset characteristics: (a) performance vs. number of variables; (b) performance vs. operator complexity; (c) entropy H vs. score10 for all datasets.
B1ElR4cgg

ADVERSARIALLY LEARNED INFERENCE

Vincent Dumoulin, Ishmael Belghazi, Ben Poole

ABSTRACT

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks and a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network. We illustrate the ability of the model to learn mutually coherent inference and generation networks through the inspection of model samples and reconstructions, and confirm the usefulness of the learned representations by obtaining a performance competitive with state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.

1 INTRODUCTION

Deep directed generative models have emerged as a powerful framework for modeling complex high-dimensional datasets. These models permit fast ancestral sampling, but are often challenging to learn due to the complexities of inference. Recently, three classes of algorithms have emerged as effective for learning deep directed generative models: 1) techniques based on the Variational Autoencoder (VAE) that aim to improve the quality and efficiency of inference by learning an inference machine (Kingma & Welling, 2013; Rezende et al., 2014), 2) techniques based on Generative Adversarial Networks (GANs) that bypass inference altogether (Goodfellow et al., 2014), and 3) autoregressive approaches (van den Oord et al., 2016b;c;a) that forego latent representations and instead model the relationship between input variables directly. While all techniques are provably consistent given infinite capacity and data, in practice they learn very different kinds of generative models on typical datasets.

VAE-based techniques learn an approximate inference mechanism that allows reuse for various auxiliary tasks, such as semi-supervised learning or inpainting. They do however suffer from a well-recognized issue of the maximum likelihood training paradigm when combined with a conditional independence assumption on the output given the latent variables: they tend to distribute probability mass diffusely over the data space (Theis et al., 2015). The direct consequence of this is that image samples from VAE-trained models tend to be blurry (Goodfellow et al., 2014; Larsen et al., 2015). Autoregressive models produce outstanding samples but do so at the cost of slow sampling speed and foregoing the learning of an abstract representation of the data. GAN-based approaches represent a good compromise: they learn a generative model that produces higher-quality samples than the best VAE techniques (Radford et al., 2015; Larsen et al., 2015) without sacrificing sampling speed, and also make use of a latent representation in the generation process. However, GANs lack an efficient inference mechanism, which prevents them from reasoning about data at an abstract level. For instance, GANs don't allow the sort of neural photo manipulations showcased in Brock et al. (2016). Recently, efforts have aimed to bridge the gap between VAEs and GANs, to learn generative
models with higher-quality samples while learning an efficient inference network (Larsen et al., 2015; Lamb et al., 2016; Dosovitskiy & Brox, 2016). While this is certainly a promising research direction, VAE-GAN hybrids tend to manifest a compromise of the strengths and weaknesses of both approaches.

In this paper, we propose a novel approach to integrate efficient inference within the GAN framework. Our approach, called Adversarially Learned Inference (ALI), casts the learning of both an inference machine (or encoder) and a deep directed generative model (or decoder) in a GAN-like adversarial framework. A discriminator is trained to discriminate joint samples of the data and the corresponding latent variable from the encoder (or approximate posterior) from joint samples from the decoder, while in opposition, the encoder and the decoder are trained together to fool the discriminator. Not only are we asking the discriminator to distinguish synthetic samples from real data, but we are requiring it to distinguish between two joint distributions over the data space and the latent variables.

With experiments on the Street View House Numbers (SVHN) dataset (Netzer et al., 2011), the CIFAR-10 object recognition dataset (Krizhevsky & Hinton, 2009), the CelebA face dataset (Liu et al., 2015) and a downsampled version of the ImageNet dataset (Russakovsky et al., 2015), we show qualitatively that we maintain the high sample fidelity associated with the GAN framework, while gaining the ability to perform efficient inference. We show that the learned representation is useful for auxiliary tasks by achieving results competitive with the state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.

Figure 1: The adversarially learned inference (ALI) game. On the encoder side, x ∼ q(x) is mapped to ẑ ∼ q(z | x); on the decoder side, z ∼ p(z) is mapped through G_x(z) to x̃ ∼ p(x | z); both joint pairs are fed to the discriminator.

Consider the two following probability distributions over x and z:

the encoder joint distribution q(x, z) = q(x) q(z | x),
the decoder joint distribution p(x, z) = p(z) p(x | z).

These two distributions have marginals that are known to us: the encoder marginal q(x) is the empirical data distribution and the decoder marginal p(z) is usually defined to be a simple, factorized distribution, such as the standard Normal distribution p(z) = N(0, I). As such, the generative process between q(x, z) and p(x, z) is reversed.

ALI's objective is to match the two joint distributions. If this is achieved, then we are ensured that all marginals match and all conditional distributions also match. In particular, we are assured that the conditional q(z | x) matches the posterior p(z | x).

In order to match the joint distributions, an adversarial game is played. Joint pairs (x, z) are drawn either from q(x, z) or p(x, z), and a discriminator network learns to discriminate between the two, while the encoder and decoder networks are trained to fool the discriminator.

The value function describing the game is given by

$$\min_G \max_D V(D, G) = \mathbb{E}_{q(x)}[\log(D(x, G_z(x)))] + \mathbb{E}_{p(z)}[\log(1 - D(G_x(z), z))] = \iint q(x)\, q(z \mid x) \log(D(x, z))\, dx\, dz + \iint p(z)\, p(x \mid z) \log(1 - D(x, z))\, dx\, dz \quad (1)$$

Algorithm 1 The ALI training procedure
θ_g, θ_d ← initialize network parameters
repeat
    x(1), ..., x(M) ∼ q(x)                    ▷ Draw M samples from the dataset and the prior
    z(1), ..., z(M) ∼ p(z)
    ẑ(i) ∼ q(z | x = x(i)), i = 1, ..., M     ▷ Sample from the conditionals
    x̃(j) ∼ p(x | z = z(j)), j = 1, ..., M
    ρ_q(i) ← D(x(i), ẑ(i)), i = 1, ..., M     ▷ Compute discriminator predictions
    ρ_p(j) ← D(x̃(j), z(j)), j = 1, ..., M
    L_d ← −(1/M) Σ_i log(ρ_q(i)) − (1/M) Σ_j log(1 − ρ_p(j))   ▷ Compute discriminator loss
    L_g ← −(1/M) Σ_j log(ρ_p(j)) − (1/M) Σ_i log(1 − ρ_q(i))   ▷ Compute generator loss
    θ_d ← θ_d − ∇_{θ_d} L_d                   ▷ Gradient update on discriminator network
    θ_g ← θ_g − ∇_{θ_g} L_g                   ▷ Gradient update on generator networks
until convergence

In such a setting, and under the assumption of an optimal discriminator, the generator minimizes the Jensen-Shannon divergence (Lin, 1991) between q(x, z) and p(x, z). This can be shown using the same proof sketch as in the original GAN paper (Goodfellow et al., 2014).

ALI bears close resemblance to GAN, but it differs from it in the two following ways:

The generator has two components: the encoder, G_z(x), which maps data samples x to z-space, and the decoder, G_x(z), which maps samples from the prior p(z) (a source of noise) to the input space.

The discriminator is trained to distinguish between joint pairs (x, ẑ = G_z(x)) and (x̃ = G_x(z), z), as opposed to marginal samples x ∼ q(x) and x̃ ∼ p(x).

In other words, the discriminator is trained to distinguish between samples from the encoder, (x, ẑ) ∼ q(x, z), and samples from the decoder, (x̃, z) ∼ p(x, z), and the generator is trained to fool the discriminator, i.e., to generate (x, z) pairs from q(x, z) or p(x, z) that are indistinguishable one from another. See Figure 1 for a diagram of the adversarial game and Algorithm 1 for an algorithmic description of the procedure.

An attractive property of adversarial approaches is that they do not require that the conditional densities can be computed; they only require that they can be sampled from in a way that allows gradient backpropagation. In the case of ALI, this means that gradients should propagate from the discriminator network to the encoder and decoder networks.

This can be done using the reparametrization trick (Kingma, 2013; Bengio et al., 2013b;a). Instead of sampling directly from the desired distribution, the random variable is computed as a deterministic transformation of some noise such that its distribution is the desired distribution. For instance, if q(z | x) = N(μ(x), σ²(x)I), one can draw samples by computing z = μ(x) + σ(x) ⊙ ε, ε ∼ N(0, I). More generally, the random variable of interest is expressed as a deterministic transformation of some noise, v = f(u, ε).

In recent work, Chen et al. (2016) introduce a model called InfoGAN, which maximizes the mutual information between a subset c of the latent code and x through the use of an auxiliary distribution Q(c | x). However, this does not correspond to full inference on z, as only the value for c is inferred. Additionally, InfoGAN requires that Q(c | x) is a tractable approximate posterior that can be sampled from and evaluated. ALI only requires that inference networks can be sampled from, allowing it to represent arbitrarily complex posterior distributions.
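For concreteness, the following condensed PyTorch sketch implements one update of Algorithm 1, using the alternative generator objective discussed in Section 2.3 (labels flipped rather than gradients through log(1 − D) alone). The encoder, decoder and discriminator modules are placeholders: the encoder is assumed to return the mean and log standard deviation used by the reparametrization trick, and the discriminator is assumed to return a logit.

import torch
import torch.nn.functional as F

def ali_step(encoder, decoder, discriminator, opt_d, opt_g, x, latent_dim):
    z = torch.randn(x.size(0), latent_dim, device=x.device)      # z ~ p(z)
    mu, log_sigma = encoder(x)
    z_hat = mu + log_sigma.exp() * torch.randn_like(mu)          # reparametrization
    x_tilde = decoder(z)

    d_enc = discriminator(x, z_hat)        # (x, z_hat) ~ q(x, z)
    d_dec = discriminator(x_tilde, z)      # (x_tilde, z) ~ p(x, z)
    ones, zeros = torch.ones_like(d_enc), torch.zeros_like(d_dec)

    # Discriminator update: encoder pairs labeled 1, decoder pairs labeled 0.
    loss_d = (F.binary_cross_entropy_with_logits(d_enc, ones)
              + F.binary_cross_entropy_with_logits(d_dec, zeros))
    opt_d.zero_grad(); loss_d.backward(retain_graph=True); opt_d.step()

    # Generator (encoder + decoder) update: flip the labels.
    d_enc = discriminator(x, z_hat)
    d_dec = discriminator(x_tilde, z)
    loss_g = (F.binary_cross_entropy_with_logits(d_enc, zeros)
              + F.binary_cross_entropy_with_logits(d_dec, ones))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()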
Alternatively, one could decompose training into two phases. In the first phase, a GAN is trained normally. In the second phase, the GAN's decoder is frozen and an encoder is trained following the ALI procedure (i.e., a discriminator taking both x and z as input is introduced). We call this post-hoc learned inference. In this setting, the encoder and the decoder cannot interact together during training, and the encoder must work with whatever the decoder has learned during GAN training. Post-hoc learned inference may be suboptimal if this interaction is beneficial to modeling the data distribution.

One could also learn the inverse mapping from GAN samples: this corresponds to learning an encoder to reconstruct z, i.e. finding an encoder such that $\mathbb{E}_{z \sim p(z)}[\| z - G_z(G_x(z)) \|_2^2] \approx 0$. We are not aware of any work that reports results for this approach. This resembles the InfoGAN learning procedure, but with a fixed generative model and a factorial Gaussian posterior with a fixed diagonal variance.

The adversarial game does not require an analytical expression for the joint distributions. This means we can introduce variable changes without having to know the explicit distribution over the new variable. For instance, sampling from p(z) could be done by sampling ε ∼ N(0, I) and passing it through an arbitrary differentiable function z = f(ε).

However, gradient propagation into the encoder and decoder networks relies on the reparametrization trick, which means that ALI is not directly applicable to either applications with discrete data or to models with discrete latent variables.

2.3 GENERATOR VALUE FUNCTION

As with GANs, when ALI's discriminator gets too far ahead, its generator may have a hard time minimizing the value function in Equation 1. If the discriminator's output is sigmoidal, then the gradient of the value function with respect to the discriminator's output vanishes to zero as the output saturates.

As a workaround, the generator is trained to maximize

$$V'(D, G) = \mathbb{E}_{q(x)}[\log(1 - D(x, G_z(x)))] + \mathbb{E}_{p(z)}[\log(D(G_x(z), z))]$$

2.4 DISCRIMINATOR OPTIMALITY

Proposition 1. Given a fixed generator G, the optimal discriminator is given by

$$D^*(x, z) = \frac{q(x, z)}{q(x, z) + p(x, z)}$$

For a fixed generator, the value function can be rewritten as

$$V(D, G) = \mathbb{E}_{x, z \sim q(x,z)}[\log(D(x, z))] + \mathbb{E}_{x, z \sim p(x,z)}[\log(1 - D(x, z))]$$

The result follows from the concavity of the log and the simplified Euler-Lagrange first-order conditions on (x, z) ↦ D(x, z).

Proposition 2. Under an optimal discriminator D*, the generator minimizes the Jensen-Shannon divergence, which attains its minimum if and only if q(x, z) = p(x, z).
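Proposition 1 is easy to check numerically: for fixed density values q(x, z) and p(x, z) at a point, the pointwise integrand q log(D) + p log(1 − D) is maximized at D* = q/(q + p). The values of q and p below are arbitrary, chosen only for illustration.

import numpy as np

q, p = 0.3, 0.8                          # densities at some fixed (x, z)
d = np.linspace(1e-6, 1 - 1e-6, 100000)
integrand = q * np.log(d) + p * np.log(1.0 - d)
d_best = d[np.argmax(integrand)]
assert abs(d_best - q / (q + p)) < 1e-4  # both are approximately 0.2727
print(d_best, q / (q + p))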
Proposition 3. Assuming an optimal discriminator D and generator G, if the encoder G_z is deterministic, then G_x = G_z^{-1} and G_z = G_x^{-1} almost everywhere.

Sketch of proof. Consider the event $R_\epsilon = \{x : \|x - (G_x \circ G_z)(x)\| > \epsilon\}$ for some positive ε. This set can be seen as a section of the (x, z) space over the elements z such that z = G_z(x). The generator being optimal, the probabilities of $R_\epsilon$ under p(x, z) and q(x, z) are equal. Now p(x | z) = δ(x − G_x(z)), where δ is the Dirac delta distribution. This is enough to show that there is no x satisfying the event $R_\epsilon$, and thus G_x = G_z^{-1} almost everywhere. By symmetry, the same argument can be applied to show that G_z = G_x^{-1}. The complete proof is given in Donahue et al. (2016), in which the authors independently examine the same model structure under the name Bidirectional GAN (BiGAN).

3 RELATED WORK

Other recent papers explore hybrid approaches to generative modeling. One such approach is to relax the probabilistic interpretation of the VAE model by replacing either the KL-divergence term or the reconstruction term with variants that have better properties. The adversarial autoencoder model (Makhzani et al., 2015) replaces the KL-divergence term with a discriminator that is trained to distinguish between approximate posterior and prior samples, which provides a more flexible approach to matching the marginal q(z) and the prior. Other papers explore replacing the reconstruction term with either GANs or auxiliary networks. Larsen et al. (2015) collapse the decoder of a VAE and the generator of a GAN into one network in order to supplement the reconstruction loss with a learned similarity metric. Lamb et al. (2016) use the hidden layers of a pre-trained classifier as auxiliary reconstruction losses to help the VAE focus on higher-level details when reconstructing. Dosovitskiy & Brox (2016) combine both ideas into a unified loss function.

Independent work by Donahue et al. (2016) proposes the same model under the name Bidirectional GAN (BiGAN), in which the authors emphasize the learned features' usefulness for auxiliary supervised and semi-supervised tasks. The main difference in terms of experimental setting is that they use a deterministic q(z | x) network, whereas we use a stochastic network. In our experience, this does not make a big difference when x is a deterministic function of z, as the stochastic inference networks tend to become deterministic as training progresses. When using stochastic mappings from z to x, the additional flexibility of stochastic posteriors is critical.

ALI's approach is also reminiscent of the adversarial autoencoder model, which employs a GAN to distinguish between samples from the approximate posterior distribution q(z | x) and prior samples. However, unlike adversarial autoencoders, no explicit reconstruction loss is being optimized in ALI, and the discriminator receives joint pairs of samples (x, z) rather than marginal z samples.

4 EXPERIMENTS

We applied ALI to four different datasets, namely CIFAR10 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), CelebA (Liu et al., 2015) and a center-cropped, 64 x 64 version of the ImageNet dataset (Russakovsky et al., 2015).¹

Transposed convolutions are used in G_x(z). This operation corresponds to the transpose of the matrix representation of a convolution, i.e., the gradient of the convolution with respect to its inputs. For more details about transposed convolutions and related operations, see Dumoulin & Visin (2016); Shi et al. (2016); Odena et al. (2016).

For each dataset, samples are presented (Figures 2a, 3a, 4a and 5a). They exhibit the same image fidelity as samples from other adversarially-trained models.

Figure 2: Samples and reconstructions on the SVHN dataset: (a) SVHN samples; (b) SVHN reconstructions. For the reconstructions, odd columns are original samples from the validation set and even columns are corresponding reconstructions (e.g. the second column contains reconstructions of the first column's validation set samples).

Figure 3: Samples and reconstructions on the CelebA dataset: (a) CelebA samples; (b) CelebA reconstructions. For the reconstructions, odd columns are original samples from the validation set and even columns are corresponding reconstructions.

Figure 4: Samples and reconstructions on the CIFAR10 dataset: (a) CIFAR10 samples; (b) CIFAR10 reconstructions. For the reconstructions, odd columns are original samples from the validation set and even columns are corresponding reconstructions.

¹ The code for all experiments can be found at https://github.com/IshmaelBelghazi/ALI. Readers can also consult the accompanying website at https://ishmaelbelghazi.github.io/ALI.
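As a small illustration of the transposed convolutions mentioned above, the PyTorch snippet below shows how a ConvTranspose2d mirrors the shape change of the corresponding convolution; the layer sizes are illustrative and not the paper's exact architecture.

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1)
deconv = nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 3, 32, 32)
h = conv(x)        # -> (1, 64, 16, 16): the stride-2 convolution downsamples
x_up = deconv(h)   # -> (1, 3, 32, 32): the transposed convolution upsamples back
print(h.shape, x_up.shape)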
Figure 5: Samples and reconstructions on the Tiny ImageNet dataset: (a) Tiny ImageNet samples; (b) Tiny ImageNet reconstructions. For the reconstructions, odd columns are original samples from the validation set and even columns are corresponding reconstructions.

We also qualitatively evaluate the fit between the conditional distribution q(z | x) and the posterior distribution p(z | x) by sampling ẑ ∼ q(z | x) and x̂ ∼ p(x | z = ẑ) (Figures 2b, 3b, 4b and 5b). This corresponds to reconstructing the input in a VAE setting. Note that the ALI training objective does not involve an explicit reconstruction loss.

We observe that reconstructions are not always faithful reproductions of the inputs. They retain the same crispness and quality characteristic of adversarially-trained models, but oftentimes make mistakes in capturing exact object placement, color, style and (in extreme cases) object identity. The extent to which reconstructions deviate from the inputs varies between datasets: on CIFAR10, which arguably constitutes a more complex input distribution, the model exhibits less faithful reconstructions. This leads us to believe that poor reconstructions are a sign of underfitting.

This failure mode represents an interesting departure from the blurriness characteristic of the typical VAE setup. We conjecture that in the underfitting regime, the latent variable representation learned by ALI is potentially more invariant to less interesting factors of variation in the input and does not devote model capacity to capturing these factors.

As a sanity check for overfitting, we look at latent space interpolations between validation set examples (Figure 6). We sample pairs of validation set examples x1 and x2 and project them into z1 and z2 by sampling from the encoder. We then linearly interpolate between z1 and z2 and pass the intermediary points through the decoder to plot the input-space interpolations.

We observe smooth transitions between pairs of examples, and intermediary images remain believable. This is an indicator that ALI is not concentrating its probability mass exclusively around training examples, but rather has learned latent features that generalize well.

4.3 SEMI-SUPERVISED LEARNING

We first compare with GAN on SVHN by following the procedure outlined in Radford et al. (2015). We train an L2-SVM on the learned representations of a model trained on SVHN. The last three hidden layers of the encoder as well as its output are concatenated to form an 8960-dimensional feature vector. A 10,000-example held-out validation set is taken from the training set and is used for model selection. The SVM is trained on 1000 examples taken at random from the remainder of the training set. The test error rate is measured for 100 different SVMs trained on different random 1000-example training sets, and the average error rate is measured along with its standard deviation.
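A hedged sketch of this evaluation protocol is given below. The extract_features function is a placeholder for reading out the trained encoder's activations, and the SVM settings are assumptions rather than the exact configuration used in the paper.

import numpy as np
from sklearn.svm import LinearSVC

def extract_features(encoder_layers, x):
    # Hypothetical: flatten and concatenate the activations of the last three
    # hidden layers and the output (8960 dimensions in the paper's setup).
    feats = [layer(x).reshape(len(x), -1) for layer in encoder_layers]
    return np.concatenate(feats, axis=1)

def svm_error_rate(train_feats, train_y, test_feats, test_y,
                   n_labeled=1000, seed=0):
    rng = np.random.RandomState(seed)
    idx = rng.choice(len(train_y), n_labeled, replace=False)
    svm = LinearSVC(C=1.0)                       # L2-regularized linear SVM
    svm.fit(train_feats[idx], train_y[idx])
    return 1.0 - svm.score(test_feats, test_y)   # misclassification rate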
Figure 6: Latent space interpolations on the CelebA validation set. Left and right columns correspond to the original pairs x1 and x2, and the columns in between correspond to the decoding of latent representations interpolated linearly from z1 to z2. Unlike other adversarial approaches like DCGAN (Radford et al., 2015), ALI allows one to interpolate between actual data points.

Using ALI's inference network as opposed to the discriminator to extract features, we achieve a misclassification rate that is roughly 3.00 ± 0.50% lower than reported in Radford et al. (2015) (Table 1), which suggests that ALI's inference mechanism is beneficial to the semi-supervised learning task.

We then investigate ALI's performance when label information is taken into account during training. We adapt the discriminative model proposed in Salimans et al. (2016). The discriminator takes x and z as input and outputs a distribution over K + 1 classes, where K is the number of categories. When label information is available for q(x, z) samples, the discriminator is expected to predict the label. When no label information is available, the discriminator is expected to predict K + 1 for p(x, z) samples and k ∈ {1, ..., K} for q(x, z) samples.

Interestingly, Salimans et al. (2016) found that they required an alternative training strategy for the generator where it tries to match first-order statistics in the discriminator's intermediate activations with respect to the data distribution (they refer to this as feature matching). We found that ALI did not require feature matching to obtain comparable results. We achieve results competitive with the state-of-the-art, as shown in Tables 1 and 2. Table 2 shows that ALI offers a modest improvement over Salimans et al. (2016), more specifically for 1000 and 2000 labeled examples.

Table 1: SVHN test set misclassification rate.

Model                                            | Misclassification rate
VAE (M1 + M2) (Kingma et al., 2014)              | 36.02
SWWAE with dropout (Zhao et al., 2015)           | 23.56
DCGAN + L2-SVM (Radford et al., 2015)            | 22.18
SDGM (Maaløe et al., 2016)                       | 16.61
GAN (feature matching) (Salimans et al., 2016)   | 8.11 ± 1.3
ALI (ours, L2-SVM)                               | 19.14 ± 0.50
ALI (ours, no feature matching)                  | 7.42 ± 0.65

Table 2: CIFAR10 test set misclassification rate for semi-supervised learning using different numbers of labeled examples. For ALI, error bars correspond to 3 times the standard deviation.

Number of labeled examples                       | 1000         | 2000         | 4000         | 8000
Ladder network (Rasmus et al., 2015)             | -            | -            | 20.40        | -
CatGAN (Springenberg, 2015)                      | -            | -            | 19.58        | -
GAN (feature matching) (Salimans et al., 2016)   | 21.83 ± 2.01 | 19.61 ± 2.09 | 18.63 ± 2.32 | 17.72 ± 1.82
ALI (ours, no feature matching)                  | 19.98 ± 0.89 | 19.09 ± 0.44 | 17.99 ± 1.62 | 17.05 ± 1.49

4.4 CONDITIONAL GENERATION

We extend ALI to match a conditional distribution. Let y represent a fully observed conditioning variable. In this setting, the value function reads

$$\min_G \max_D V(D, G) = \mathbb{E}_{q(x)\,p(y)}[\log(D(x, G_z(x, y), y))] + \mathbb{E}_{p(z)\,p(y)}[\log(1 - D(G_x(z, y), z, y))]$$

We apply the conditional version of ALI to CelebA using the dataset's 40 binary attributes. The attributes are linearly embedded in the encoder, decoder and discriminator. We observe how a single element of the latent space z changes with respect to variations in the attributes vector y. Conditional samples are shown in Figure 7.
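One simple way to realize this conditioning, sketched below under assumed sizes, is to embed y linearly and concatenate the embedding to the network inputs; the module shown is a hypothetical fully-connected discriminator, not the exact CelebA architecture.

import torch
import torch.nn as nn

class ConditionalDiscriminator(nn.Module):
    def __init__(self, x_dim, z_dim, y_dim, embed_dim=64, hidden=512):
        super().__init__()
        self.embed = nn.Linear(y_dim, embed_dim)   # linear embedding of y
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim + embed_dim, hidden),
            nn.LeakyReLU(0.01),
            nn.Linear(hidden, 1),                  # logit for D(x, z, y)
        )

    def forward(self, x, z, y):
        e = self.embed(y.float())
        return self.net(torch.cat([x, z, e], dim=1))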
Figure 7: Conditional generation sequence. We sample a single fixed latent code z. Each row has a subset of attributes that are held constant across columns. The attributes are male, attractive, young for row I; male, attractive, older for row II; female, attractive, young for row III; female, attractive, older for row IV. Attributes are then varied uniformly over rows across all columns in the following sequence: (b) black hair; (c) brown hair; (d) blond hair; (e) black hair, wavy hair; (f) blond hair, bangs; (g) blond hair, receding hairline; (h) blond hair, balding; (i) black hair, smiling; (j) black hair, smiling, mouth slightly open; (k) black hair, smiling, mouth slightly open, eyeglasses; (l) black hair, smiling, mouth slightly open, eyeglasses, wearing hat.

To highlight the role of the inference network during learning, we performed an experiment on a toy dataset for which q(x) is a 2D Gaussian mixture with 25 mixture components laid out on a grid. The covariance matrices and centroids have been chosen such that the distribution exhibits lots of modes separated by large low-probability regions, which makes it a decently hard task despite the 2D nature of the dataset.

We trained ALI and GAN on 100,000 q(x) samples. The decoder and discriminator architectures are identical between ALI and GAN (except for the input of the discriminator, which receives the concatenation of x and z in the ALI case). Each model was trained 10 times using Adam (Kingma & Ba, 2014) with random learning rate and β1 values, and the weights were initialized by drawing from a Gaussian distribution with a random standard deviation.

We measured the extent to which the trained models covered all 25 modes by drawing 10,000 samples from their p(x) distribution and assigning each sample to a q(x) mixture component according to the mixture responsibilities. We defined a dropped mode as one that wasn't assigned to any sample. Using this definition, we found that ALI models covered 13.4 ± 5.8 modes on average (min: 8, max: 25) while GAN models covered 10.4 ± 9.2 modes on average (min: 1, max: 22).

We are still investigating the differences between ALI and GAN with respect to feature matching, but we conjecture that the latent representation learned by ALI is better untangled with respect to the classification task and that it generalizes better.

Figure 8: Comparison of (a) ALI, (b) GAN with an encoder learned to reconstruct latent samples (inverse mapping), (c) GAN with an encoder learned through ALI (post-hoc learned inference), and (d) a variational autoencoder (VAE), on the 2D toy dataset. The ALI model in (a) does a much better job of covering the latent space (second row) and producing good samples than the two GAN models (b, c) augmented with an inference mechanism.

We then selected the best-covering ALI and GAN models, and the GAN model was augmented with an encoder using the learned inverse mapping and post-hoc learned inference procedures outlined in subsection 2.2. The encoders learned for GAN inference have the same architecture as ALI's encoder. We also trained a VAE with the same encoder-decoder architecture as ALI to outline the qualitative differences between ALI and VAE models.
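The mode-coverage measurement described above can be sketched as follows; the grid layout, scale and sample generation are illustrative assumptions. With equal isotropic components, the most responsible mixture component is simply the nearest centroid.

import numpy as np

def count_covered_modes(samples, centroids):
    # Assign each sample to the mixture component with highest responsibility,
    # which for equal isotropic Gaussians is the nearest centroid; a dropped
    # mode is a component receiving no sample at all.
    d2 = ((samples[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return len(np.unique(np.argmin(d2, axis=1)))

centroids = np.array([[i, j] for i in range(5) for j in range(5)], float)
fake = (np.random.randn(10000, 2) * 0.05
        + centroids[np.random.randint(25, size=10000)])
print(count_covered_modes(fake, centroids))   # 25 when every mode is hit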
We then compared each model's inference capabilities by reconstructing 10,000 held-out samples from q(x). Figure 8 summarizes the experiment. We observe the following:

The ALI encoder models a marginal distribution q(z) that matches p(z) fairly well (row 2, column a). The learned representation does a decent job at clustering and organizing the different mixture components.

The GAN generator (row 5, columns b-c) has more trouble reaching all the modes than the ALI generator (row 5, column a), even over 10 runs of hyperparameter search.

Learning an inverse mapping from GAN samples does not work very well: the encoder has trouble covering the prior marginally, and the way it clusters mixture components is not very well organized (row 2, column b). As discussed in subsection 2.2, reconstructions suffer from the generator dropping modes.

Learning inference post-hoc doesn't work as well as training the encoder and the decoder jointly. As had been hinted at in subsection 2.2, it appears that adversarial training benefits
Due to the nature of the loss function being optimized, the VAE model covers all modes easily (row 5, column d) and excels at reconstructing data samples (row 3, column d) However, they have a much more pronounced tendency to smear out their probability density (row 5, column d) and leave \"holes\"' in q(z) (row 2, column d). Note however that recent approaches such as Inverse Autoregressive Flow (Kingma et al.2016) may be used to improve on this, at the cost of a more complex mathematical framework.\nJames Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a cpu and gpu math expression compiler. In Proceedings of the Python for scientific computing conference (SciPy) volume 4, pp. 3. Austin, TX, 2010.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprini arXiv:1412.6980, 2014.\nDiederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016\nAlex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016\nAnders Boesen Lindbo Larsen, Soren Kaae Sonderby, and Ole Winther. Autoencoding beyond pixel using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.\nJianhua Lin. Divergence measures based on the shannon entropy. Information Theory, IEEE Transactions on, 37(1):145-151, 1991.\nLars Maalge, Casper Kaae Sonderby, Soren Kaae Sonderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.\nAugustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts http://distill.pub/2016/deconv-checkerboard/, 2016.\nAlec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.\nAntti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko. Semi-supervised learning with ladder network. In Advances in Neural Information Processing Systems, 2015, 2015.\nDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation anc approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair.. Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014\nan J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. arXiv preprint arXiv:1302.4389. 2013\nDiederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581-3589, 2014.\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009\nYuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS workshop on deep learning. and unsupervised feature learning, volume 2011, pp. 4. Granada, Spain, 2011.\nTim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training gans. 
arXiv preprint arXiv:1606.03498, 2016\nWenzhe Shi, Jose Caballero, Lucas Theis, Ferenc Huszar, Andrew Aitken, Christian Ledig, and Zehan Wang. Is the deconvolution layer the same as a convolutional layer? arXiv preprint arXiv:1609.07009, 2016.\nJost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.\nAaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016a.\nJunbo Zhao. Michael Mathieu. Ross Goroshin. and Yann Lecun. Stacked what-where auto-encoders arXiv preprint arXiv:1506.02351, 2015.\nTheano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016\nBart van Merrienboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley. Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. arXiv preprint arXiv:1506.00619, 2015.\nOperation Kernel Strides Feature maps BN? Dropout Nonlinearity Gz(x) - 3 32 32 input Convolution 5 x 5 1 x 1 32 V 0.0 Leaky ReLU Convolution 4 x 4 2 x 2 64 0.0 Leaky ReLU Convolution 4 x 4 1 x 1 128 V 0.0 Leaky ReLU Convolution 4 x 4 2 x 2 256 0.0 Leaky ReLU 512 Convolution 4 x 4 1 x 1 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 512 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 128 x 0.0 Linear Gx(z) - 64 1 x 1 input Transposed convolution 4 x 4 1 x 1 256 V 0.0 Leaky ReLU Transposed convolution 4 x 4 2 x 2 128 V 0.0 Leaky ReLU Transposed convolution 4 x 4 1 x 1 64 V 0.0 Leaky ReLU Transposed convolution 4 x 4 2 x 2 32 V 0.0 Leaky ReLU Transposed convolution 5 5 1 x 1 32 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 32 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 3 x 0.0 Sigmoid D(x) - 3 32 32 input Convolution 5 5 1 x 1 32 x 0.2 Maxout Convolution 4 x 4 2 x 2 64 0.5 Maxout x Convolution 4 x 4 1 x 1 128 x 0.5 Maxout Convolution 4 x 4 2 x 2 256 x 0.5 Maxout Convolution 4 x 4 1 x 1 512 x 0.5 Maxout D(z) - 64 x 1 x 1 input Convolution 1 x 1 1 x 1 512 0.2 Maxout Convolution 1 x 1 1 x 1 512 X 0.5 Maxout D(x, z) - 1024 x 1 x 1 input Concatenate D(x) and D(z) along the channel axis Convolution 1 x 1 1 x 1 1024 x 0.5 Maxout Convolution 1 x1 1 x 1 1024 x 0.5 Maxout Convolution 1 x1 1 x 1 1 X 0.5 Sigmoid Optimizer Adam ( = 10-4, 1 = 0.5, 2 = 10-3) Batch size100 Epochs 6475 Leaky ReLU slope, maxout pieces 0.1, 2 Weight, bias initialization Isotropic gaussian ( = 0, = 0.01), Constant(0)\nTable 4: SVHN model hyperparameters (unsupervised)\nOperation Kernel Strides Feature maps BN? 
Dropout Nonlinearit Gz(x) - 3 32 32 input Convolution 5 x 5 1 x 1 32 V 0.0 Leaky ReL Convolution 4 x 4 2 x 2 64 V 0.0 Leaky ReL Convolution 4 x 4 1 x 1 128 V 0.0 Leaky ReL Convolution 4 x 4 2 x 2 256 0.0 Leaky ReL Convolution 4 x 4 1 x 1 512 0.0 Leaky ReL Convolution 1 x 1 512 V 0.0 Leaky ReLl 1 x 1 Convolution 1 x 1 1 x 1 512 X 0.0 Linear Gx(z) - 256 1 1 input Transposed convolution 4 x 4 1 x 1 256 V 0.0 Leaky ReL Transposed convolution 4 x 4 2 x 2 128 V 0.0 Leaky ReL Transposed convolution 4 x 4 1 x 1 64 0.0 Leaky ReL 4 x 4 2 x 2 32 V 0.0 Leaky ReLl Transposed convolution Transposed convolution 5 x 5 1 x 1 32 V 0.0 Leaky ReL Convolution 1 x 1 1 x 1 32 V 0.0 Leaky ReL Convolution 1 x 1 1 x 1 3 x 0.0 Sigmoid D(x) - 3 32 32 input Convolution 5 x 5 1 x 1 32 x 0.2 Leaky ReL Convolution 4 x 4 2 x 2 64 V 0.2 Leaky ReL Convolution 4 x 4 1 x 1 128 V 0.2 Leaky ReL Convolution 4 x 4 2 x 2 256 0.2 Leaky ReL Convolution 4 x 4 1 x 1 512 V 0.2 Leaky ReL D(z) - 256 1 x 1 input Convolution 1 x 1 1 x 1 512 0.2 x Leaky ReL Convolution 1 x 1 1 x 1 512 x 0.2 Leaky ReL )(x, z) - 1024 x 1 1 input Concatenate D(x) and D(z) along the channel axis Convolution 1x1 1 x 1 1024 0.2 Leaky ReL Convolution 1 x 1 1 x 1 1024 x 0.2 Leaky ReL Convolution 1x1 1 x 1 1 0.2 Sigmoid Optimizer Adam ( = 10-4, 1 = 0.5, 2 = 10-3) Batch size100 Epochs 100 Leaky ReLU slope0.01 Weight, bias initialization Isotropic gaussian ( = 0, = 0.01), Constant(0)\nOperation Kernel Strides Feature maps BN? Dropout Nonlinearity Gz(x) - 3 64 64 input Convolution 2 x 2 1 x 1 64 0.0 Leaky ReLU Convolution 7 x 7 2 x 2 128 0.0 Leaky ReLU V Convolution 5 x 5 2 x 2 256 0.0 Leaky ReLU Convolution 7 x 7 2 x 2 256 0.0 Leaky ReLU Convolution 4 x 4 1 x 1 512 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 512 x 0.0 Linear Gx(z) - 512 1 1 input Transposed convolution 4 x 4 1 x 1 512 0.0 Leaky ReLU V Transposed convolution 7 x 7 2 x 2 256 0.0 Leaky ReLU V Transposed convolution 5 5 2 x 2 256 0.0 Leaky ReLU V Transposed convolution 7 x 7 2 x 2 128 V 0.0 Leaky ReLU Transposed convolution 2 x 2 1 x 1 64 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 3 x 0.0 Sigmoid D(x) - 3 64 64 input Convolution 2 x 2 1 x 1 64 V 0.0 Leaky ReLU Convolution 7 x 7 2 x 2 128 0.0 Leaky ReLU Convolution 5 5 2 x 2 256 V 0.0 Leaky ReLU Convolution 7 x 7 2 x 2 256 V 0.0 Leaky ReLU Convolution 4 x 4 1 x 1 512 V 0.0 Leaky ReLU D(z) - 512 1 x 1 input Convolution 1 x 1 1 x 1 1024 x 0.2 Leaky ReLU Convolution 1 x 1 1 x 1 1024 x 0.2 Leaky ReLU D(x, z) - 1024 1 x 1 input Concatenate D(x) and D(z) along the channel axis Convolution 1 x 1 1 x 1 2048 x 0.2 Leaky ReLU Convolution 1 x 1 1 x 1 2048 0.2 Leaky ReLU Convolution 1 x 1 1 x 1 1 x 0.2 Sigmoid Optimizer Adam ( = 10-4, 1 = 0.5) Batch size 100 Epochs 123 Leaky ReLU slope 0.02\nTable 5: CelebA model hyperparameters (unsupervised)\nTable 6: Tiny ImageNet model hyperparameters (unsupervised)\nOperation Kernel Strides Feature maps BN? 
Dropout Nonlinearity Gz(x) - 3 64 64 input Convolution 4 x 4 2 x 2 64 0.0 Leaky ReLU Convolution 4 x 4 1 x 1 64 V 0.0 Leaky ReLU Convolution 4 x 4 2 x 2 128 V 0.0 Leaky ReLU Convolution 4 x 4 1 x 1 128 V 0.0 Leaky ReLU Convolution 4 x 4 2 x 2 256 V 0.0 Leaky ReLU Convolution 4 x 4 1 x 1 256 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 2048 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 2048 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 512 x 0.0 Linear Gx(z) - 256 1 1 input Convolution 1 x 1 1 x 1 2048 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 256 V 0.0 Leaky ReLU Transposed convolution 4 x 4 1 x 1 256 V 0.0 Leaky ReLU Transposed convolution 4 x 4 2 x 2 128 V 0.0 Leaky ReLU Transposed convolution 4 x 4 1 x 1 128 0.0 Leaky ReLU 4 x 4 2 x 2 64 V Transposed convolution 0.0 Leaky ReLU Transposed convolution 4 x 4 1 x 1 64 V 0.0 Leaky ReLU Transposed convolution 4 x 4 2 x 2 64 V 0.0 Leaky ReLU Convolution 1 x 1 1 x 1 3 x 0.0 Sigmoid D(x) - 3 64 64 input Convolution 4 x 4 2 x 2 64 x 0.2 Leaky ReLU Convolution 4 x 4 1 x 1 64 V 0.2 Leaky ReLU Convolution 4 x 4 2 x 2 128 V 0.2 Leaky ReLU Convolution 4 x 4 1 x 1 128 V 0.2 Leaky ReLU Convolution 4 x 4 2 x 2 256 V 0.2 Leaky ReLU Convolution 1 x 1 256 4 x 4 0.2 Leaky ReLU D(z) - 256 1 x 1 input Convolution 1x 1 1 x 1 2048 x 0.2 Leaky ReLU Convolution 1 x 1 1 x 1 2048 x 0.2 Leaky ReLU D(x, z) - 2304 1 x 1 input Concatenate D(x) and D(z) along the channel axis Convolution 1 x 1 1 x 1 4096 x 0.2 Leaky ReLU Convolution 1 x 1 1 x 1 4096 x 0.2 Leaky ReLU Convolution 1 x 1 1 x 1 1 x 0.2 Sigmoid Optimizer Adam ( = 10-4, 1 = 0.5, 2 = 10-3) Batch size128 Epochs 125 Leaky ReLU slope0.01 Weight, bias initialization Isotropic gaussian ( = 0, = 0.01), Constant(0)\nZelda's description Zach's description Mr. Discriminator Xavier's painting Xena's depiction\nFigure 9: A Circle of Infinite Painters' view of the ALI game\nThe Circle of Infinite Painters is a very prolific artistic group. Very little is known about the Circle. but what we do know is that it is composed of two very brilliant artists. It has produced new painting almost daily for more than twenty years, each one more beautiful than the others. Not only are the. paintings exquisite, but their title and description is by itself a literary masterpiece..\nHowever, some scholars believe that things might not be as they appear: certain discrepancies in. the Circle's body of work hints at the Circle being composed of more than one artistic duo. This is. what Joseph Discriminator, art critique and world expert on the Circle, believes. He's recently been working intensively on the subject. Without knowing it, he's right: the Circle is not one, but twe artistic duos.\nXavier and Zach Prior form the creative component of the group. Xavier is a painter and can, in one hour and starting from nothing, produce a painting that would make any great painter jealous Impossible however for him to explain what he's done: he works by intuition alone. Zach is an author and his literary talent equals Xavier's artistic talent. His verb is such that the scenes he describes could just as well be real.\nBy themselves, the Prior brothers cannot collaborate: Xavier can't paint anything from a description and Zach is bored to death with the idea of describing anything that does not come out of his head This is why the Prior brothers depend on the Conditional sisters so much..\nAs such, the four members of the Circle work in pairs. What Xavier paints, Zelda describes, and what Zach describes, Xena paints. 
They all work together to fulfill the same vision of a unified Circle of Infinite Painters, a whole greater than the sum of its parts.\nWill the Circle reach this ideal, or will it be unmasked by Mr. Discriminator?\nZelda Conditional has an innate descriptive talent: she can examine a painting and describe it so well. that the original would seem like an imitation. Xena Conditional has a technical mastery of painting that allows her to recreate everything that's described to her in the most minute details. However their creativity is inversely proportional to their talent: by themselves, they cannot produce anything. of interest.\nThis is why Joseph Discriminator's observations bother them so much. Secretly, the Circle put Mr Discriminator under surveillance. Whatever new observation he's made, they know right away and work on attenuating the differences to maintain the illusion of a Circle of Infinite Painters made of a single artistic duo."}] |
HkNKFiGex | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Editing photos typically involves some form of manipulating individual pixels, and achieving desirabl results often requires significant user expertise. Given a sufficiently powerful image model, howeve. a user could quickly make large, photorealistic changes with ease by instead interacting with th model's controls. Two recent advances, the Variational Autoencoder (VAE)(Kingma & Welling|2014 and Generative Adversarial Network (GAN)(Goodfellow et al.2014), have shown great promise for use in modeling the complex, high-dimensional distributions of natural images, but significan challenges remain before these models can be used as general-purpose image editors.\nVAEs are probabilistic graphical models that learn to maximize a variational lower bound on the. likelihood of the data by projecting into a learned latent space, then reconstructing samples from that. space. GANs learn a generative model by training one network, the \"discriminator,\" to distinguish between real and generated data, while simultaneously training a second network, the \"generator,\" to transform a noise vector into samples which the discriminator cannot distinguish from real data. Both. approaches can be used to generate and interpolate between images by operating in a low-dimensional. learned latent space, but each comes with its own set of benefits and drawbacks..\nBy contrast, GANs have unstable and often oscillatory training dynamics, but produce images with sharp, photorealistic features. Basic GANs lack an inference mechanism, though techniques to train an inference network (Dumoulin et al. 2016) (Donahue et al.2016) have recently been developed as well as a hybridization that uses the VAE's inference network (Larsen et al.[2015).\nTwo key issues arise when attempting to use a latent-variable generative model to manipulate natural images. First, producing acceptable edits requires that the model be able to achieve close-to-exact reconstructions by inferring latents, or else the model's output will not match the original image This simultaneously necessitates an inference mechanism (or inference-by-optimization) and careful"}, {"section_index": "1", "section_name": "NEURAL PHOTO EDITING WITH INTROSPECTIVE AD VERSARIAL NETWORKS", "section_text": "Nick Weston Renishaw plc Research Ave, Nortl Edinburgh, UK NickWes\nNick.Weston@renishaw.com"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "VAEs have stable training dynamics, but tend to produce images that discard high-frequency details. when trained using maximum likelihood. Using the intermediate activations of a pre-trained discrim. inative neural network as features for comparing reconstructions to originals (Lamb et al.2016) mollifies this effect, but requires labels in order to train the discriminative network in a supervised fashion.\n7Neural Photo Editor Neural Photo Editor 7Neural Photo Editor Sample Reset InferCol SampleReset Infer Col SampleReset InferCol\nFigure 1: The Neural Photo Editor. The original image is center. The red and blue tiles are visualizations of the latent space, and can be directly manipulated as well.\ndesign of the model architecture, as there is a tradeoff between reconstruction accuracy and learnec feature quality that varies with the size of the information bottleneck\nSecond, achieving a specific desired edit requires that the user be able to manipulate the model's latent variables in an interpretable way. 
Typically, this would require that the model's latent space be augmented during training and testing with a set of labeled attributes, such that interpolating along a latent such as \"not smiling/smiling\" produces a specific change. In the fully unsupervised setting. however, such semantically meaningful output features are generally controlled by an entangled se1. of latents which cannot be directly manipulated..\nIn this paper, we present the Neural Photo Editor, an interface that handles both of these issues enabling a user to make large, coherent changes to the output of unsupervised generative models by indirectly manipulating the latent vector with a \"contextual paintbrush.\" By applying a simple interpolating mask, we enable this same exploration for existing photos despite reconstruction errors\nComplementary to the Neural Photo Editor, we develop techniques to improve on common design tradeoffs in generative models. Our model, the Introspective Adversarial Network (IAN), is a hybridization of the VAE and GAN that leverages the power of the adversarial objective while maintaining the VAE's efficient inference mechanism, improving upon previous VAE/GAN hybrids both in parametric efficiency and output quality. We employ a novel convolutional block based on dilated convolutions (Yu & Koltun, 2016) to efficiently increase the network's receptive field, and Orthogonal Regularization, a novel weight regularizer.\nWe demonstrate the qualitative sampling, reconstructing, and interpolating ability of the IAN on CelebA (Liu et al.]2015), SVHN (Netzer et al.]2011), CIFAR-10 (Krizhevsky & Hinton]2009), and Imagenet (Russakovsky et al.|2015), and quantitatively demonstrate its inference capabilities with competitive performance on the semi-supervised SVHN classification task. Further quantita tive experiments on CIFAR-100 (Krizhevsky & Hinton 2009) verify the generality of our dilated. convolution blocks and Orthogonal Regularization..\nWe present an interface, shown in Figure|1] that turns a coarse user input into a refined, photorealistic image edit by indirectly manipulating the latent space with a \"contextual paintbrush.\" The key idea is simple: a user selects a paint brush size and color (as with a typical image editor) and paints on the output image. Instead of changing individual pixels, the interface backpropagates the difference between the local image patch and the requested color, and takes a gradient descent step in the. latent space to minimize that difference. This step results in globally coherent changes that are semantically meaningful in the context of the requested color change. Given an output image X and.\nFigure 2: Visualizing the interpolation mask. Top, left to right: Reconstruction, reconstruction error original image. Bottom: Modified reconstruction, , output.\ndXus - X|2, evaluated at the curren a user requested color Xuser, the change in latent values is - dZ paintbrush location each time a user requests an edit.\nThis technique enables exploration of samples generated by the network, but fails when applied. directly to existing photos, as it relies on the manipulated image being completely controlled by the latent variables, and reconstructions are usually imperfect. We circumvent this issue by introducing a. 
simple masking technique that transfers edits from a reconstruction back to the original image..\nWe take the output image to be a sum of the reconstruction, and a masked combination of the requested pixel-wise changes and the reconstruction error:\nY=X+M+(1-M)(X-X)\nWhere X is the original image, X is the model's reconstruction of X, and is the difference between the modified reconstruction and X. The mask M is the channel-wise mean of the absolute value of , smoothed with a Gaussian filter q and truncated pointwise to be between O and 1:\nThe mask is designed to allow changes to the reconstruction to show through based on their magnitude This relaxes the accuracy constraints by requiring that the reconstruction be feature-aligned rather than pixel-perfect, as only modifications to the reconstruction are applied to the original image. As long as the reconstruction is close enough and interpolations are smooth and plausible, the system will successfully transfer edits.\nFor example, if a user has an image of a person with light skin, dark hair, and a widow's peak, by painting a dark color on the forehead, the system will automatically add hair in the requested area Similarly, if a user has a photo of a person with a closed-mouth smile, the user can produce a toothy grin by painting bright white over the target's mouth..\nM = min(g(),1)\nA visualization of the masking technique is shown in Figure[2] This method adds minimal computa tional cost to the underlying latent space exploration and produces convincing changes of features including hair color and style, skin tone, and facial expression. A video of the interface in action is available online\nE D(G(Z)) Z~ N(0,I) 1Z D G(E(X)) G D(G(E(X))) X D(X) f(G(E(X))) f(X)\nFigure 3: The Introspective Adversarial Network (IAN)\nComplementary to the Neural Photo Editor, we introduce the Introspective Adversarial Network. (IAN), a novel hybridization of the VAE and GAN motivated by the need for an image model with photorealistic outputs that achieves high-quality reconstructions without loss of representational power. There is typically a design tradeoff between these two goals related to the size of the latent space: a higher-dimensional latent space (i.e. a wider representational bottleneck) tends to learn less descriptive features, but produces higher quality reconstructions..\nWe thus seek techniques to improve the capacity of the latent space without increasing its dimension ality. Similar to VAE/GAN (Larsen et al.] 2015), we use the decoder network of the autoencoder as the generator network of the GAN, but instead of training a separate discriminator network, we combine the encoder and discriminator into a single network. Central to the IAN is the idea that features learned by a discriminatively trained network tend to be more expressive those learned by an encoder network trained via maximum likelihood (i.e. more useful on semi-supervised tasks), and thus better suited for inference. As the Neural Photo Editor relies on high-quality reconstructions. the inference capacity of the underlying model is critical. Accordingly, we use the discriminator of the GAN, D, as a feature extractor for an inference subnetwork, E, which is implemented as a fully-connected layer on top of the final convolutional layer of the discriminator. 
We infer latent values Z ~ E(X) = q(Z|X) for reconstruction and sample random values Z ~ p(Z) from a standard normal for random image generation using the generator network, G.\nIncluding the VAE's KL divergence between the inferred latents E(X) and the prior p(Z), the loss function for the generator and encoder network is thus:\nLE.G. DKL(E(X)||p(Z) Aado,\nSimilar to VAE/GAN and DeePSiM (Dosovitskiy & Brox][2016), we use three distinct loss functions\nimg, the L1 pixel-wise reconstruction loss, which we prefer to the L2 reconstruction loss. for its higher average gradient.. feature, the feature-wise reconstruction loss, evaluated as the L2 difference between the. original and reconstruction in the space of the hidden layers of the discriminator.. Ladv, the ternary adversarial loss, a modification of the adversarial loss that forces the discriminator to label a sample as real, generated, or reconstructed (as opposed to a binary. real vs. generated label).\nWe compare reconstructions using the intermediate activations, f(G(E(X))), of all convolutional layers of the discriminator, mirroring the perceptual losses of Discriminative Regularization (Lamb et al.]2016), VAE/GAN (Larsen et al.] 2015), and DeepSiM (Dosovitskiy & Brox2016). We note that Feature Matching (Salimans et al.|2016) is designed to operate in a similar fashion, but without the guidance of an inference mechanism to match latent values Z to particular values of f(G(Z)) We find that using this loss to complement the pixel-wise difference results in sharper reconstructions that better preserve higher frequency features and edges."}, {"section_index": "3", "section_name": "3.2 TERNARY ADVERSARIAL LOSS", "section_text": "The standard GAN discriminator network is trained using an implicit label source (real vs fake):. noting the success of augmenting the discriminator's objective with supervised labels (Odena et al. 2016), we seek additional sources of implicit labels, in the hopes of achieving similar improvements.. The ternary loss provides an additional source of supervision to the discriminator by asking it to. determine if a sample is real, generated, or a reconstruction, while the generator's goal is still to. have the discriminator assign a high \"real\" probability to both samples and reconstructions. We thus modify the discriminator to have three output units with a softmax nonlinearity, and train it to. minimize the categorical cross-entropy:\nLDadv -log(Drea1(X) cted(G(E(X)))\nWhere each D term in Equation 4jindicates the discriminator output unit assigned to each label class The generator is trained to produce outputs that maximize the probability of the label \"real\" being assigned by the discriminator by minimizing LGadv:\nLGadv = -log(Dreat(G(Z))) - log(Dreal(G(E(X)))\nWe posit that this loss helps maintain the balance of power early in training by preventing the discriminator from learning a small subset of features (e.g. artifacts in the generator's output) that distinguish real and generated samples, reducing the range of useful features the generator can learn from the discriminator. 
We also find that this loss leads to higher sample quality, perhaps because the additional source of supervision leads to the discriminator ultimately learning a richer feature space"}, {"section_index": "4", "section_name": "3.3 ARCHITECTURE", "section_text": "Our model has the same basic structure as DCGAN (Radford et al.l2015), augmented with Multiscal Dilated Convolution (MDC) blocks in the generator, and Minibatch Discrimination (Salimans et al 2016) in the discriminator. As in (Radford et al.2015), we use Batch Normalization (Ioffe & Szegedy 2015) and Adam (Kingma & Ba2014) in both networks. All of our code is publicly available\nWe propose a novel Inception-style (Szegedy et al.]2016) convolutional block motivated by the ideas. that image features naturally occur at multiple scales, that a network's expressivity is proportional to. the range of functions it can represent divided by its total number of parameters, and by the desire to efficiently expand a network's receptive field. The Multiscale Dilated Convolution (MDC) block. applies a single FxF filter at multiple dilation factors, then performs a weighted elementwise sum.\n2https://github.com/ajbrock/Neural-Photo-Editor\nWhere the A terms weight the relative importance of each loss. We set Aimg to 3 and leave the other. terms at 1. The discriminator is updated solely using the ternary adversarial loss. During each training step, the generator produces reconstructions G(E(X)) (using the standard VAE reparameterization trick) from data X and random samples G(Z), while the discriminator observes X as well as the reconstructions and random samples, and both networks are simultaneously updated.\n3x3, W 5x5,Wd2 7x7,Wd3 3+2S,WdS ? k1 k2 K3 ks Sum (a) (b)\nFigure 4: (a) Multiscale Dilated Convolution Block. (b) Visualizing a 3d3 MDC filter composition\nof each dilated filter's output, allowing the network to simultaneously learn a set of features and the relevant scales at which those features occur with a minimal increase in parameters. This also rapidly expands the network's receptive field without requiring an increase in depth or the number of parameters. Dilated convolutions have previously been successfully applied in semantic segmentation (Yu & Koltun2016), and a similar scheme, minus the parameter sharing, is proposed in (Chen et al. 2016).\nAs shown in Figure4(a), each block is parameterized by a bank of N FxF filters W, applied with. S factors of dilation, and a set of N*S scalars k, which relatively weight the output of each filter a1 each scale. This is naturally and efficiently implemented by reparameterizing a sparsely populated. F+(S-1)*(F-1) filterbank as displayed in Figure 4(b). We propose two variants: Standard MDC where the filter weights are tied to a base W, and Full-Rank MDC, where filters are given the sparse layout of Figure4(b) but the weights are not tied. Selecting Standard versus Full-Rank MDC blocks allows for a design tradeoff between parametric efficiency and model flexibility. In our architecture we replace the hidden layers of the generator with Standard MDC blocks, using F=5 and D=2; we specify MDC blocks by their base filter size and their maximum dilation factor (e.g. 5d2).."}, {"section_index": "5", "section_name": "3.5 ORTHOGONAL REGULARIZATION", "section_text": "Orthogonality is a desirable quality in ConvNet filters, partially because multiplication by an orthogo. nal matrix leaves the norm of the original matrix unchanged. This property is valuable in deep or. 
recurrent networks, where repeated matrix multiplication can result in signals vanishing or exploding We note the success of initializing weights with orthogonal matrices (Saxe et al.]2014), and posit. that maintaining orthogonality throughout training is also desirable. To this end, we propose a simple. weight regularization technique, Orthogonal Regularization, that encourages weights to be orthogonal. by pushing them towards the nearest orthogonal manifold. We augment our objective with the cost:.\nWhere indicates a sum across all filter banks. W is a filter bank, and I is the identity matrix"}, {"section_index": "6", "section_name": "4 RELATED WORK", "section_text": "The method of iGAN (Zhu et al.]2016) bears the most relation to our interface. The iGAN interface allows a user to impose shape or color constraints on an image of an object through use of a brush\nLortho = (|WwT _ II)\nOur architecture builds directly off of previous VAE/GAN hybrids (Larsen et al.]2015) (Dosovitskiy. & Brox2016), with the key difference being our combination of the discriminator and the encoder. to improve computational and parametric efficiency (by reusing discriminator features) as well as. reconstruction accuracy (as demonstrated in our CelebA ablation studies). The methods of ALI (Dumoulin et al.]2016) and BiGAN (Donahue et al.]2016) provide an orthogonal approach to GAN inference, in which an inference network is trained by an adversarial (as opposed to a variational). process.\nFigure 5: CelebA and SVHN samples\nAnother related interface (Champanard,2016) refines simple user input into complex textures through use of artistic style transfer (Gatys et al.J|2015). Other related work (Whitel 2016) also circumvents the need for labeled attributes by constructing latent vectors by analogy and bias-correcting them"}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "We qualitatively evaluate the IAN on 64x64 CelebA (Liu et al.|2015), 32x32 SVHN (Netzer et al. 2011), 32x32 CIFAR-10 (Krizhevsky & Hinton2009), and 64x64 Imagenet (Russakovsky et al. 2015). Our models are implemented in Theano (Team[2016) with Lasagne (Dieleman et al.]2015) Samples from the IAN, randomly selected and shown in Figure[5] display the visual fidelity typical. of adversarially trained networks. The IAN demonstrates high quality reconstructions on previously unseen data, shown in Figure [6] and smooth, plausible interpolations, even between drastically different samples. CIFAR and Imagenet samples, along with additional comparisons to samples from. other models, are available in the appendix.."}, {"section_index": "8", "section_name": "5.1 DISCRIMINATIVE EXPERIMENTS", "section_text": "We quantitatively demonstrate the effectiveness of our MDC blocks and Orthogonal Regularizatior on the CIFAR-100 (Krizhevsky & Hinton] 2009) benchmark. Using standard data augmentation, we train a set of 40-layer, k=12 DenseNets (Huang et al.|2016) for 50 epochs, annealing the learning rate at 25 and 37 epochs. We add varying amounts of Orthogonal Regularization and modify the\nBoth iGAN and the Neural Photo Editor turn coarse user input into refined outputs through use of a. generative model, but the methods differ in several key ways. First, we focus on editing portraits.. rather than objects such as shoes or handbags, and are thus more concerned with modifying features as opposed to overall color or shape, for which our method is less well-suited. Our edit transfer. 
technique follows this difference as well: we directly transfer the local image changes produced by. the model back onto the original image, rather than estimating and mimicking motion and color flow.\nSecond, our interface applies user edits one step at a time, rather than iteratively optimizing the output This highlights the difference in design approaches: iGAN seeks to produce outputs that best match a given set of user constraints, while we seek to allow a user to guide the latent space traversal.\nFinally, we explicitly tailor our model design to the task at hand and jointly train an inference network which we use at test time to produce reconstructions in a single shot. In contrast, iGAN trains an inference network to minimize the L2 loss after training the generator network, and use the inference network to get an initial estimate of the inferred latents. which are then iteratively optimized\nFigure 6: CelebA and SVHN Reconstructions and Interpolations. The outermost images are originals the adjacent images are reconstructions.\nstandard DenseNet architecture by replacing every 3x3 filterbank with 3d3 MDC blocks, and report. the test error after training in Table|1] In addition, we compare to performance using full 7x7 filters\nThere is a noticeable increase in performance with the progressive addition of our modifications despite a negligible increase in the number of parameters. Adding Orthogonal Regularizatior improves the network's generalization ability; we suspect this is because it encourages the filte weights to remain close to a desirable, non-zero manifold, increasing the likelihood that all of th available model capacity is used by preventing the magnitude of the weights from overly diminishing Replacing 3x3 filters with MDC blocks yields additional performance gains; we suspect this ii due to an increase in the expressive power and receptive field of the network, allowing it to leari longer-range dependencies with ease. We also note that substituting Full-Rank MDC blocks into 40-Layer DenseNet improves performance by a relative 5%, with the only increased computationa cost coming from using the larger filters."}, {"section_index": "9", "section_name": "5.2 EVALUATING MODIFICATIONS", "section_text": "For use in editing photos, a model must produce reconstructions which are photorealistic and feature aligned, and have smooth, plausible interpolations between outputs. We perform an ablation study tc. investigate the effects of our proposals, and employ several metrics to evaluate model quality giver these goals. In this study, we progressively add modifications to a VAE/GAN (Larsen et al.]2015 baseline, and train each network for 50 epochs..\nFor reconstruction accuracy, pixel-wise distance does not tend to correlate well with perceptual similarity. In addition to pixel-wise L2 distance, we therefore compare model reconstruction accuracy in terms of:\nFor use in evaluating the IAN, we additionally train 40-layer, k=12 DenseNets on the CelebA attribute classification task with varying amounts of Orthogonal Regularization. A plot of the train and validation error during training is available in Figure 7. The addition of of Orthogonal Regularization improves the validation error from 6.55% to 4.22%, further demonstrating its utility.\nFeature-wise L, distance in the final layer of a 40-Layer k=12 DenseNet trained for the CelebA attribute classification task.. Trait reconstruction error. We run our classification DenseNet to predict a binary attribute. 
vector y(X) given an image X, and y(G(E(X))) given a model's reconstruction, then. measure the percent error. Fiducial keypoint Error, measured as the mean L2 distance between the facial landmarks. predicted by the system of (Sankaranarayanan et al.|2016).\nTable 1: Error rates on CIFAR-100+ after 50 epochs\nGauging the visual quality of the model's outputs is notoriously difficult, but the Inception score recently proposed by (Salimans et al.]2016) has been found to correlate positively with human. evaluated sample quality. Using our CelebA attribute classification network in place of the Inceptior Szegedy et al.f[2016) model, we compare the Inception score of each model evaluated on 50,000 random samples. We posit that this metric is also indicative of interpolation quality, as a high visua quality score on a large sample population suggests that the model's output quality remains high regardless of the state of the latent space..\nResults of this ablation study are presented in Table 2] samples and reconstructions from each. configuration are available in the appendix, along with comparisons between a fully-trained IAN. and related models. As with our discriminative experiments, we find that the progressive addition of modifications results in consistent performance improvements across our reconstruction metrics and the Inception score.\nWe note that the single largest gains come from the inclusion of MDC blocks, suggesting that th network's receptive field is a critical aspect of network design for both generative and discriminativ. tasks, with an increased receptive field correlating positively with reconstruction accuracy and sample quality.\nThe improvements from Orthogonal Regularization suggest that encouraging weights to lie close to the orthogonal manifold is beneficial for improving the sample and reconstruction quality of generative neural networks by preventing learned weights from collapsing to an undesirable manifold this is consistent with our experience iterating through network designs, where we have found mode collapse to occur less frequently while using Orthogonal Regularization.\nFinally, the increase in sample quality and reconstruction accuracy through use of the ternary adversarial loss suggests that including the \"reconstructed\" target in the discriminator's objective does lead to the discriminator learning a richer feature space. This comes along with our observations that training with the ternary loss, where we have observed that the generator and discriminator losses tend to be more balanced than when training with the standard binary loss.\nMDC Ortho. Reg Ternary Pixel Feature Trait(%) Keypoint Inception VAE/GAN Baseline 0.295 4.86 0.197 2.21 1389(64) x x x 0.285 4.76 0.189 2.11 1772(37) x x 0.258 4.67 0.182 1.79 2160(70) J x x 0.248 4.69 0.172 1.54 2365(97) J J x 0.230 4.39 0.165 1.47 3158(98) x x J 0.254 4.60 0.177 1.67 2648(69 x J 0.239 4.51 0.164 1.57 3161(70) J x J 0.221 4.37 0.158 0.99 3300(123) J 0.192 4.33 0.155 0.97 3627(146 V\nTable 3: Error rates on Semi-Supervised SVHN with 1000 training examples. Figure 7: Performance on CelebA Classification task with varying Orthogonal Regularization..\nWe quantitatively evaluate the inference abilities of our architecture by applying it to the semi supervised SVHN classification task using two different procedures. 
We first evaluate using the procedure of (Radford et al.]2015) by training an L2-SVM on the output of the FC layer of the encoder subnetwork, and report average test error and standard deviation across 100 different SVMs each trained on 1000 random examples from the training set.\nNext, we use the procedure of (Salimans et al.2016), where the discriminator outputs a distribution. over the K object categories and an additional \"fake\" category, for a total of K+1 outputs. The discriminator is trained to predict the category when given labeled data, to assign the \"fake\" label. when provided data from the generator, and to assign k E {1, ..., K} when provided unlabeled real. data. We modify feature-matching based Improved-GAN to include the encoder subnetwork and. reconstruction losses detailed in Section[3] but do not include the ternary adversarial loss..\nOur performance, as shown in Table|3] is competitive with other networks evaluated in these fashions achieving 18.5% mean classification accuracy when using SVMs and 8.34% accuracy when using the method of Improved-GAN. When using SVMs, our method tends to demonstrate improvement over previous methods, particularly over standard VAEs. We believe this is due to the encoder subnetwork being based on more descriptive features (i.e. those of the discriminator), and therefore better suitec to discriminating between SVHN classes.\nWe introduced the Neural Photo Editor, a novel interface for exploring the learned latent space of. generative models and for making specific semantic changes to natural images. Our interface makes. use of the Introspective Adversarial Network, a hybridization of the VAE and GAN that outputs high. fidelity samples and reconstructions, and achieves competitive performance in a semi-supervised. classification task. The IAN makes use of Multiscale Dilated Convolution Blocks and Orthogona Regularization, two improvements designed to improve model expressivity and feature quality for. convolutional networks."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "This research was made possible by grants and support from Renishaw plc and the Edinburgh. Centre For Robotics. The work presented herein is also partially funded under the European H2020 Programme BEACONING project, Grant Agreement nr. 687676.\nMethod Error rate 15 Baseline Train Erro 14 VAE (M1 + M2) (Kingma et al.2014) 36.02% Le-1 Ortho.Reg. . Train Error 13 SWwAE with dropout (Zhao et al. 2015 23.56% 1e-3 Ortho.Reg.Valid Error 12 1e-1 Ortho.Reg.Valid Error DCGAN + L2-SVM (Radford et al 2015 22.18%(1.13%) 11 10 SDGM (Maale et al.f|2016) 16.61%(0.24%) Error ALI (L2-SVM) (Dumoulin et al. 2016 19.14%(0.50%) IAN (ours, L2-SVM) 18.50%(0.38%) IAN (ours, Improved-GAN) 8.34%(0.91%) Improved-GAN (Salimans et al. 2016 8.11%(1.3%) 5 10 15 20 25 30 35 40 45 50 ALI (Improved-GAN) 7.3% Epochs Table 3i Figure 7\nWe find the lack of improvement when using the method of Improved-GAN unsurprising, as the IAN architecture does not change the goal of the discriminator; any changes in behavior are thus. indirectly due to changes in the generator, whose loss is only slightly modified from feature-matching Improved-GAN."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "A.J. Champanard. Semantic style transfer and turning two-bit doodles into fine artwork. arXi Preprint arXiv: 1603.01768, 2016.\nreprint arX1v: 1603.01 768 L-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, , and A. L. Yuille. 
Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. arXiv Preprint arXiv:1606.00915, 2016. S. Dieleman, J. Schluter, C. Raffel, E. Olson, S.K. Sonderby, D. Nouri, and E. Battenberg. Lasagne Firstrelease.,2015. URLhttp://dx.doi.0rg/10.5281/zenodo.27878. J. Donahue, P. Krahenbuhl, and T. Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016. A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv Preprint arXiv:1602.02644, 2016. V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville Adversarially learned inference. arXiv Preprint arXiv: 1606.0070, 2016. L.A. Gatys, A.S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv Preprint arXiv 1508.06576, 2015. I. Goodfellow, J. Pouget-Abadie, Jean, M. Mehdi, X. Bing, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems pp. 2672-2680, 2014. G. Huang, Z. Liu, K.Q. Weinberger, and L. van der Maaten. Densely connected convolutional networks. arXiv Preprint arXiv:1608.06993, 2016. S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML 2015. 2015. D.P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv Preprint arXiv 1412.6980, 2014. D.P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR 2014, 2014.\nD.P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR 2014, 2014\nA. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images, 2009\nmetric. arxiv preprint drx7 Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of. the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015. L. Maalge, C.K. Sonderby, S.K. Spnderby, and O. Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016. Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A.Y. Ng. Reading digits in natural images. with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised feature. learning, volume 2011, pp. 4. Granada, Spain, 2011. A. Odena, C. Olah, and J. Shiens. Conditional image synthesis with auxiliary classifier gans. arXiv. Preprint arXiv: 1610.09585, 2016. A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional. generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.\nO. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla. M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. arXiv Preprint arXiv: 1606.03498, 2016. S. Sankaranarayanan, R. Ranjan, C. D. Castillo, and R. Chellappa. An all-in-one convolutional neural network for face analysis. arXiv Preprint arXiv:1611.00851, 2016.\nr. White. Sampling generative networks. arXiv Preprint arXiv:1609.04468, 2016\nF. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR 2016, 2016. J. Zhao, M. Mathieu, R. Goroshin, and Y. Lecun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015. J.-Y. Zhu, P. Krahenbuhl, E. Shechtman, and A. A. Efros. 
Generative visual manipulation on the natural image manifold. In ECCV 2016, 2016.\nTable 4: Reconstructions and samples from CelebA ablation Study\nMDC Ortho. Reg. Ternary Recon1 Recon2 Sample1 Sample 2 Sample 3 Original VAE/GAN Baseline X X X X X + X X X\nFigure 9: Samples, reconstructions, and interpolations on Imagenet. Top three rows: samples bottom three rows: reconstructions and interpolations. Our model achieves an Inception score of 8.56(0.09).\nFigure 8: Samples, reconstructions, and interpolations on CIFAR-10. Top three rows: samples. bottom three rows: reconstructions and interpolations. Our model achieves an Inception score of 6.88(0.08), on par with the 6.86(0.06) achieved by Improved-GAN with historical averaging."}] |
BJ--gPcxl | [{"section_index": "0", "section_name": "SEMI-SUPERVISED LEARNING WITH CONTEXT-CONDITIONAL GENERATIVI ADVERSARIAL NETWORKS", "section_text": "Emily Denton\nWe introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to fill in the hole, based on the surrounding pixels. The in-painted images are then presented to a discriminator network that judges if they are real (unaltered training images) or not. This task acts as a reg- ularizer for standard supervised training of the discriminator. Using our approach we are able to directly train large VGG-style networks in a semi-supervised fash- ion. We evaluate on STL-1O and PASCAL datasets, where our approach obtains performance comparable or superior to existing methods."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep neural networks have yielded dramatic performance gains in recent years on tasks such as object classification (Krizhevsky et al.2012), text classification (Zhang et al.]2015) and machine translation (Sutskever et al.[2014) |Bahdanau et al.[2015). These successes are heavily dependent on large training sets of manually annotated data. In many settings however, such large collections of. labels may not be readily available, motivating the need for methods that can learn from data where. labels are rare.\nWe propose a method for harnessing unlabeled image data based on image in-painting. A generative model is trained to generate pixels within a missing hole, based on the context provided by surround ing parts of the image. These in-painted images are then used in an adversarial setting (Goodfellow et al.[2014) to train a large discriminator model whose task is to determine if the image was real (from the unlabeled training set) or fake (an in-painted image). The realistic looking fake examples provided by the generative model cause the discriminator to learn features that generalize to the related task of classifying objects. Thus adversarial training for the in-painting task can be used tc regularize large discriminative models during supervised training on a handful of labeled images."}, {"section_index": "2", "section_name": "1.1 RELATED WORK", "section_text": "Sam Gross Facebook AI Researcl New York. sgross@fb.com"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "Learning From Context: The closest work to ours is the independently developed context-encoder. approach of|Pathak et al.(2016). This introduces an encoder-decoder framework, shown in Fig.1[a), that is used to in-paint images where a patch has been randomly removed. After using this as a. pre-training task, a classifier is added to the encoder and the model is fine-tuned using the labeled. examples. Although both approaches use the concept of in-painting, they differ in several important. ways. First, the architectures are different (see Fig.1): in Pathak et al.(2016), the features for. the classifier are taken from the encoder, whereas ours come from the discriminator network. In practice this makes an important difference as we are able to directly train large models such as. VGG (Simonyan & Zisserman2015) using adversarial loss alone. By contrast,Pathak et al.[(2016) report difficulties in training an AlexNet encoder with this loss. This leads to the second difference,. namely that on account of these issues, they instead employ an l, loss when training models for. 
classification and detection (however they do use a joint l2 and adversarial loss to achieve impressive. in-painting results). Finally, the unsupervised learning task differs between the two models. The. context-encoder learns a feature representation suitable for in-painting whereas our model learns a feature representation suitable for differentiating real/fake in-paintings. Notably, while we also. use a neural network to generate the in-paintings, this model is only used as an adversary for the.\nclassification real/fake classification real/fake real/fake classification real/fake L2 loss loss loss loss loss loss loss loss 1 Discriminator Discriminator Discriminator Discriminator Decoder x,y ~ Pdata(x,y) x,y ~ Pdata(x,y) Generator Encoder Z ~ Pnoise(Z) Generator z ~ Pnoise(Z) x,y ~ Pdata(x,y) x,y ~ Pdata(x,y) (a) (b) (c)\ndiscriminator, rather than as a feature extractor. In section 4, we compare the performance of ou model to the context-encoder on the PASCAL dataset..\nDosovitskiy et al.[(2014) achieved state-of-the-art results by training a CNN with a different class. for each training example and introducing a set of transformations to provide multiple examples. per class. The pseudo-label approach (Lee, 2013) is a simple semi-supervised method that trains using the maximumly predicted class as a label when labels are unavailable. Springenberg(2015 propose a categorical generative adversarial network (CatGAN) which can be used for unsupervisec. and semi-supervised learning. The discriminator in a CatGAN outputs a distribution over classes and is trained to minimize the predicted entropy for real data and maximize the predicted entropy. for fake data. Similar to our model, CatGANs use the feature space learned by the discriminator for. the final supervised learning task. Salimans et al.(2016) recently proposed a semi-supervised GAN model in which the discriminator outputs a softmax over classes rather than a probability of real vs fake. An additional 'generated' class is used as the target for generated samples. This method differs. from our work in that it does not utilize context information and has only been applied to datasets. of small resolution. However, the discriminator loss is similar to the one we propose and could be. combined with our context-conditional approach.\nMore traditional semi-supervised methods include graph-based approaches (Zhou et al.]2004) Zhu 2006) that show impressive performance when good image representations are available. However the focus of our work is on learning such representations..\nGenerative models of images: Restricted Boltzmann machines (Salakhutdinov]2015), de-noising. autoencoders (Vincent et al.]2008) and variational autoencoders (Kingma & Welling2014) opti mize a maximum likelihood criterion and thus learn decoders that map from latent space to imag space. More recently, generative adversarial networks (Goodfellow et al.|. 2014) and generative mo\nFigure 1: (a) Context-encoder ofPathak et al.(2016), configured for object classification task. (b). Semi-supervised learning with GANs (SSL-GAN). (c) Semi-supervised learning with CC-GANs In (a-c) the blue network indicates the feature representation being learned (encoder network in the. context-encoder mode1 and discriminator network in the GAN and CC-GAN models)..\nOther forms of spatial context within images have recently been utilized for representation learning. Doersch et al.(2015) propose training a CNN to predict the spatial location of one image patch. 
relative to another.Noroozi & Favaro[(2016) propose a model that learns by unscrambling image. patches, essentially solving a jigsaw puzzle to learn visual representations. In the text domain. context has been successfully leveraged as an unsupervised criterion for training useful word and sentence level representations (Collobert et al.||2011] Mikolov et al.]. 2015|Kiros et al.2015).\nDeep unsupervised and semi-supervised learning: A popular method of utilizing unlabeled data. is to layer-wise train a deep autoencoder or restricted Botlzmann machine (Hinton et al.||2006) and. then fine tune with labels on a discriminative task. More recently, several autoencoding variants have been proposed for unsupervised and semi-supervised learning, such as the ladder network (Rasmus et al.]2015), stacked what-where autoencoders (Zhao et al.]2016) and variational autoencoders (Kingma & Welling2014)Kingma et al.]2014).\nOther models used recurrent approaches to generate images (Gregor et al. 2015 Theis & Bethge 2015] [Mansimov et al.2016] |van den Oord et al.[[2016).Dosovitskiy et al. (2015) trained a CNN to generate objects with different shapes, viewpoints and color. Sohl-Dickstein et al.(2015) propose a generative model based on a reverse diffusion process. While our model does involve image gener- ation, it differs from these approaches in that the main focus is on learning a good representation for classification tasks.\nPredictive generative models of videos aim to extrapolate from current frames to future ones and. in doing so learn a feature representation that is useful for other tasks. In this vein, Ranzato et al. (2014) used an l2-loss in pixel-space.Mathieu et al.(2015) combined an adversarial loss with l2 giving models that generate crisper images. While our model is also predictive, it only considers. interpolation within an image, rather than extrapolation in time.\nThe generative adversarial network approach (Goodfellow et al.2014) is a framework for training. generative models, which we briefly review. It consists of two networks pitted against one another in a two player game: A generative model, G, is trained to synthesize images resembling the data distribution and a discriminative model, D, is trained to distinguish between samples drawn from G and images drawn from the training data..\nMore formally, let I' = {x1, ..., xn) be a dataset of images of dimensionality d. Let D denote a discriminative function that takes as input an image x E Rd and outputs a scalar representing the. probability of input x being a real sample. Let G denote the generative function that takes as input a random vector z E R sampled from a prior noise distribution pNoise and outputs a synthesized. image x = G(z) E Rd. Ideally, D(x) = 1 when x E and D(x) = 0 when x was generated from G. The GAN objective is given by:.\nThe conditional generative adversarial network (Mirza & Osindero 2014) is an extension of the GAN in which both D and G receive an additional vector of information y as input. The conditional GAN objective is given by:\nWe propose context-conditional generative adversarial networks (CC-GANs) which are conditiona GANs where the generator is trained to fill in a missing image patch and the generator and discrim-. inator are conditioned on the surrounding pixels..\nIn particular, the generator G receives as input an image with a randomly masked out patch. The generator outputs an entire image. We fill in the missing patch from the generated output and then. pass the completed image into D. 
We pass the completed image into D rather than the context and. the patch as two separate inputs so as to prevent D from simply learning to identify discontinuities. along the edge of the missing patch..\nment matching networks (Li et al.2015 [Dziugaite et al.[[2015) have been proposed. These methods. ignore data likelihoods and instead directly train a generative model to produce realistic samples.. Several extensions to the generative adversarial network framework have been proposed to scale the approach to larger images (Denton et al.2015 Radford et al.2016 Salimans et al.]2016).Our work draws on the insights of Radford et al.(2016) regarding adversarial training practices and ar-. chitecture for the generator network, as well as the notion that the discriminator can produce useful. features for classification tasks..\nWe present a semi-supervised learning framework built on generative adversarial networks (GANs) of |Goodfellow et al.(2014). We first review the generative adversarial network framework and then introduce context conditional generative adversarial networks (CC-GANs). Finally, we show how combining a classification objective and a CC-GAN objective provides a unified framework for semi-supervised learning.\nmin max I Ex~x[log D(x)] + Ez~pnoise(z)[log(1 - D(G(z))] G D\nmin max Ex,y~x[log D(x,y)] + Ez~pnose(z)[log(1 - D(G(z,y),x)] G D\nMore formally, let m E Rd denote to a binary mask that will be used to drop out a specified portion. of an image. The generator receives as input m O x where O denotes element-wise multiplication The generator outputs xg = G(m O x, z) E Rd and the in-painted image xt is given by:.\nx1 =(1-m)Oxg+mO x\nmin max Ex~x[log D(x)] + Ex~X,m~M[log(1 - D(x1)) D"}, {"section_index": "4", "section_name": "2.3 COMBINED GAN AND CC-GAN", "section_text": "While the generator of the CC-GAN outputs a full image, only a portion of it (corresponding to the missing hole) is seen by the discriminator. In the combined model, which we denote by CC-GAN2 the fake examples for the discriminator include both the in-painted image x1 and the full image xG produced by the generator (i.e. not just the missing patch). By combining the GAN and CC-GAN approaches, we introduce a wider array of negative examples to the discriminator. The CC-GAN2 objective given by:\nmin max Ex~x[log D(x)] G D + Ex~X,m~M[log(1- D(x1))] + Ex~X,m~M[log(1- D(xG))"}, {"section_index": "5", "section_name": "2.4 SEMI-SUPERVISED LEARNING WITH CC-GANS", "section_text": "A common approach to semi-supervised learning is to combine a supervised and unsupervised ob. jective during training. As a result unlabeled data can be leveraged to aid the supervised task\nIntuitively, a GAN discriminator must learn something about the structure of natural images in orde to effectively distinguish real from generated images. Recently, Radford et al.[(2016) showed that GAN discriminator learns a hierarchical image representation that is useful for object classificatior. Such results suggest that combining an unsupervised GAN objective with a supervised classificatio objective would produce a simple and effective semi-supervised learning method. This approacl denoted by SSL-GAN, is illustrated in Fig.[1(b). The discriminator network receives a gradient fror. the real/fake loss for every real and generated image. The discriminator also receives a gradient fror. 
the classification loss on the subset of (real) images for which labels are available..\nGenerative adversarial networks have shown impressive performance on many diverse datasets However, samples are most coherent when the set of images the network is trained on comes from a limited domain (eg. churches or faces). Additionally, it is difficult to train GANs on very large images. Both these issues suggest semi-supervised learning with vanilla GANs may not scale well to datasets of large diverse images. Rather than determining if a full image is real or fake, contex1 conditional GANs address a different task: determining if a part of an image is real or fake given the surrounding context.\nin max Ex~x[log D(x)] + Ex~X,m~M[log(1 D(x1))]+AcEx,y~xc[log(Dc(y|x))]\nThe hyperparameter c balances the classification and adversarial losses. We only consider the CC GAN in the semi-supervised setting and thus drop the SSL notation when referring to this model.\nThe architecture of our generative model, G, is inspired by the generator architecture of the DCGAN (Radford et al.]2016). The model consists of a sequence of convolutional layers with subsampling (but no pooling) followed by a sequence of fractionally-strided convolutional layers. For the discrim inator, D, we used the VGG-A network (Simonyan & Zisserman2015) without the fully connectec layers (which we call the VGG-A' architecture). Details of the generator and discriminator are given\nclassification real/fake loss loss conv (1, 4x4) conv (1, 4x4) Output size: 4 pool (2x2) conv (512, 3x3) conv (512, 3x3) Output size: 8 pool (2x2) conv (512, 3x3) Output size: 128 upconv (3, 4x4, 2x2) conv (512, 3x3) Output size: 64 upconv (64, 4x4, 2x2) Output size: 16 pool (2x2) T Output size: 32 upconv (128, 4x4, 2x2) conv (256, 3x3) Output size: 16 upconv (256, 4x4, 2x2) conv (256, 3x3) Output size: 8 conv (512, 4x4, 2x2) Output size: 32 pool (2x2) Output size: 16 conv (256, 4x4, 2x2) conv (128, 3x3) Output size: 32 conv (128, 4x4, 2x2) pool (2x2) Output size: 64 Output size: 64 conv (64, 4x4, 2x2) conv (64, 3x3) Low res image: 32x32 (optional) Context image: In-painted/real image: 128x128 128x128\nFigure 2: Architecture of our context-conditional generative adversarial network (CC-GAN) conv (64, 4x4, 2x2) denotes a conv layer with 64 channels, 4x4 kernels and stride 2x2. Each convolution layer is followed by a spatial batch normalization and rectified linear layer. Dashed lines indicate optional pathways.\nin Fig.2 The input to the generator is an image with a patch zeroed out. In preliminary experiments we also tried passing in a separate mask to the generator to make the missing area more explicit bu found this did not effect performance.\nEven with the context conditioning it is difficult to generate large image patches that look realis tic, making it problematic to scale our approach to high resolution images. To address this, we propose conditioning the generator on both the high resolution image with a missing patch and a. low resolution version of the whole image (with no missing region). In this setting, the generators task becomes one of super-resolution on a portion of an image. However, the discriminator does not receive the low resolution image and thus is still faced with the same problem of determining if a given in-painting is viable or not. Where indicated, we used this approach in our PAsCAI. VOC 2007 experiments, with the original image being downsampled by a factor of 4. This provided. 
The architecture of our generative model, G, is inspired by the generator architecture of the DCGAN (Radford et al., 2016). The model consists of a sequence of convolutional layers with subsampling (but no pooling) followed by a sequence of fractionally-strided convolutional layers. For the discriminator, D, we used the VGG-A network (Simonyan & Zisserman, 2015) without the fully connected layers (which we call the VGG-A' architecture). Details of the generator and discriminator are given in Fig. 2.

[Figure 2 shows the generator (left) and discriminator (right) as stacks of conv/upconv and pooling layers; the detailed layer specifications are omitted here.]

Figure 2: Architecture of our context-conditional generative adversarial network (CC-GAN). conv (64, 4x4, 2x2) denotes a conv layer with 64 channels, 4x4 kernels and stride 2x2. Each convolution layer is followed by a spatial batch normalization and rectified linear layer. Dashed lines indicate optional pathways.

The input to the generator is an image with a patch zeroed out. In preliminary experiments we also tried passing in a separate mask to the generator to make the missing area more explicit, but found this did not affect performance.

Even with the context conditioning it is difficult to generate large image patches that look realistic, making it problematic to scale our approach to high resolution images. To address this, we propose conditioning the generator on both the high resolution image with a missing patch and a low resolution version of the whole image (with no missing region). In this setting, the generator's task becomes one of super-resolution on a portion of an image. However, the discriminator does not receive the low resolution image and thus is still faced with the same problem of determining if a given in-painting is viable or not. Where indicated, we used this approach in our PASCAL VOC 2007 experiments, with the original image being downsampled by a factor of 4. This provided enough information for the generator to fill in larger holes but not so much that it made the task trivial. This optional low resolution image is illustrated in Fig. 2 (left) with the dotted line.

We followed the training procedures of Radford et al. (2016). We used the Adam optimizer (Kingma & Ba, 2015) in all our experiments with a learning rate of 0.0002, momentum term β1 of 0.5, and the remaining Adam hyperparameters set to their default values. We set λ_c = 1 for all experiments.
"}, {"section_index": "6", "section_name": "Method", "section_text": "Method                                                     Accuracy
Multi-task Bayesian Optimization (Swersky et al., 2013)    70.10 ± 0.6
Exemplar CNN (Dosovitskiy et al., 2014)                    75.40 ± 0.3
Stacked What-Where Autoencoder (Zhao et al., 2016)         74.33
Supervised VGG-A'                                          61.19 ± 1.1
SSL-GAN                                                    73.81 ± 0.5
CC-GAN                                                     75.67 ± 0.5
CC-GAN2                                                    77.79 ± 0.8

Table 1: Comparison of CC-GAN and other published results on STL-10."}, {"section_index": "7", "section_name": "3.1 STL-10 CLASSIFICATION", "section_text": "STL-10 is a dataset of 96×96 color images with a 1:100 ratio of labeled to unlabeled examples, making it an ideal fit for our semi-supervised learning framework. The training set consists of 5000 labeled images, mapped to 10 pre-defined folds of 1000 images each, and 100,000 unlabeled images. The labeled images belong to 10 classes and were extracted from the ImageNet dataset, and the unlabeled images come from a broader distribution of classes. We follow the standard testing protocol and train 10 different models on each of the 10 predefined folds of data. We then evaluate classification accuracy of each model on the test set and report the mean and standard deviation.

We trained our CC-GAN and CC-GAN2 models on 64×64 crops of the 96×96 image. The hole was 32×32 pixels and the location of the hole varied randomly (see Fig. 3 (top)). We trained for 100 epochs and then fine-tuned the discriminator on the 96×96 labeled images, stopping when training accuracy reached 100%. As shown in Table 1, the CC-GAN model performs comparably to the current state of the art (Dosovitskiy et al., 2014) and the CC-GAN2 model improves upon it.

We also trained two baseline models in an attempt to tease apart the contributions of adversarial training and context conditional adversarial training. The first is a purely supervised training of the VGG-A' model (the same architecture as the discriminator in the CC-GAN framework). This was trained using a dropout of 0.5 on the final layer and weight decay of 0.001. The performance of this model is significantly worse than the CC-GAN model.

We also trained a semi-supervised GAN (SSL-GAN, see Fig. 1(b)) on STL-10. This consisted of the same discriminator as the CC-GAN (VGG-A' architecture) and the generator from the DCGAN model (Radford et al., 2016). The training setup in this case is identical to the CC-GAN model. The SSL-GAN performs almost as well as the CC-GAN, confirming our hypothesis that the GAN objective is a useful unsupervised criterion.

In order to compare against other methods that utilize spatial context, we ran the CC-GAN model on the PASCAL VOC 2007 dataset. This dataset consists of natural images coming from 20 classes. The dataset contains a large amount of variability with objects varying in size, pose, and position.
The training and validation sets combined contain 5,011 images, and the test set contains 4,952 images. The standard measure of performance is mean average precision (mAP).

We trained each model on the combined training and validation set for ~5000 epochs and evaluated on the test set once.¹ Following Pathak et al. (2016), we train using random cropping, and then evaluate using the average prediction from 10 random crops.

Our best performing model was trained on images of resolution 128×128 with a hole size of 64×64 and a low resolution input of size 32×32. Table 2 compares our CC-GAN method to other feature learning approaches on the PASCAL test set. It outperforms them, beating the current state of the art (Wang & Gupta, 2015) by 3.8%. It is important to note that our feature extractor is the VGG-A' model, which is larger than the AlexNet architecture (Krizhevsky et al., 2012) used by other approaches in Table 2. However, purely supervised training of the two models reveals that VGG-A' is less than 2% better than AlexNet. Furthermore, our model outperforms the supervised VGG-A' baseline by a 7% margin (62.2% vs. 55.2%). This suggests that our gains stem from the CC-GAN method rather than the use of a better architecture.

¹Hyperparameters were determined by initially training on the training set alone and measuring performance on the validation set.
"}, {"section_index": "8", "section_name": "Method", "section_text": "Method                                             mAP
Supervised AlexNet                                 53.3%
Visual tracking from video (Wang & Gupta, 2015)    58.4%
Context prediction (Doersch et al., 2015)          55.3%
Context encoders (Pathak et al., 2016)             56.5%
Supervised VGG-A'                                  55.2%
CC-GAN                                             62.2%
CC-GAN2                                            62.7%

Table 2: Comparison of CC-GAN and other methods (as reported by Pathak et al. (2016)) on PASCAL VOC 2007.

Method               Image size   Hole size   Low res size   mAP
Supervised VGG-A'    64×64        -           -              52.97%
CC-GAN               64×64        32×32       -              56.79%
Supervised VGG-A'    96×96        -           -              55.22%
CC-GAN               96×96        48×48       -              60.38%
CC-GAN               96×96        48×48       24×24          60.98%
Supervised VGG-A'    128×128      -           -              55.2%
CC-GAN               128×128      64×64       -              61.3%
CC-GAN               128×128      64×64       32×32          62.2%

Table 3: Comparison of different CC-GAN variants on PASCAL VOC 2007.

Table 3 shows the effect of training on different resolutions. The CC-GAN improves over the baseline CNN consistently regardless of image size. We found that conditioning on the low resolution image began to help when the hole size was largest (64×64). We hypothesize that the low resolution conditioning would be more important for larger images, potentially allowing the method to scale to larger image sizes than we explored in this work."}, {"section_index": "9", "section_name": "3.3 INPAINTING", "section_text": "We now show some sample in-paintings produced by our CC-GAN generators. In our semi-supervised learning experiments on STL-10 we remove a single fixed-size hole from the image. The top row of Fig. 3 shows in-paintings produced by this model. We also explored different masking schemes, as illustrated in the remaining rows of Fig. 3 (however these did not improve classification results). In all cases we see that training the generator with the adversarial loss produces sharp, semantically plausible in-painting results.

Fig. 4 shows generated images and in-painted images from a model trained with the CC-GAN2 criterion. The output of a CC-GAN generator tends to be corrupted outside the patch used to in-paint the image (since gradients only flow back to the missing patch). However, in the CC-GAN2 model, we see that both the in-painted image and the generated image are coherent and semantically consistent with the masked input image.
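The parenthetical can be verified directly: because x_I = (1 − m) ⊙ x_G + m ⊙ x, the gradient of any loss on x_I reaches x_G only through the (1 − m) factor. A toy sketch with illustrative values (not from the paper):

import tensorflow as tf

m = tf.constant([[1.0, 1.0, 0.0, 0.0]])    # toy 1-D "image": right half is the hole
x = tf.constant([[0.2, 0.4, 0.6, 0.8]])    # real pixels
x_g = tf.Variable([[0.5, 0.5, 0.5, 0.5]])  # stand-in for the generator's output

x_i = (1.0 - m) * x_g + m * x              # in-painting composition
loss = tf.reduce_sum(x_i ** 2)             # stand-in for the discriminator loss

grad = tf.gradients(loss, x_g)[0]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grad))  # [[0. 0. 1. 1.]]: zero on context pixels, nonzero on the hole

CC-GAN2 sidesteps this by also feeding x_G itself to the discriminator, so the generator receives gradient signal over the whole image.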
Fig. 5 shows in-painted images from a generator trained on 128×128 PASCAL images. Fig. 6 shows the effect of adding a low resolution (32×32) image as input to the generator. For comparison we also show the result of in-painting by filling in with a bi-linearly upsampled image. Here we see the generator produces high-frequency structure rather than simply learning to copy the low resolution patch.

Figure 3: STL-10 in-painting with CC-GAN training and varying methods of dropping out the image.

Figure 4: STL-10 in-painting with combined CC-GAN2 training.

Figure 5: PASCAL in-painting with CC-GAN.

Figure 6: PASCAL in-painting with CC-GAN conditioned on low resolution image. Top two rows show input to generator. Third row shows inpainting by bilinear upsampling. Bottom row shows inpainted image by generator.
"}, {"section_index": "10", "section_name": "4 DISCUSSION", "section_text": "We have presented a simple semi-supervised learning framework based on in-painting with an adversarial loss. The generator in our CC-GAN model is capable of producing semantically meaningful in-paintings, and the discriminator performs comparably to or better than existing semi-supervised methods on two classification benchmarks.

Since discrimination of real/fake in-paintings is more closely related to the target task of object classification than extracting a feature representation suitable for in-filling, it is not surprising that we are able to exceed the performance of Pathak et al. (2016) on PASCAL classification. Furthermore, since our model operates on images half the resolution of those used by other approaches (128×128 vs. 224×224), there is potential for further gains if improvements in the generator resolution can be made. Our models and code are available at https://github.com/edenton/cc-gan.

Acknowledgements: Emily Denton is supported by a Google Fellowship. Rob Fergus is grateful for the support of CIFAR.

REFERENCES

D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In The International Conference on Learning Representations, 2015.

R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 2011.

Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28, 2015.

Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In International Conference on Computer Vision, 2015.

Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs, tables and cars with convolutional networks. In Computer Vision and Pattern Recognition, 2015.
Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems 27, 2014.

Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In Uncertainty in Artificial Intelligence, 2015.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In International Conference on Machine Learning, 2015.

Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems 28, 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pp. 1106-1114, 2012.

Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In ICML, 2015.

Elman Mansimov, Emilio Parisotto, Jimmy Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. In The International Conference on Learning Representations, 2016.

Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv 1511.05440, 2015.

T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 28, 2015.

Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. CoRR, abs/1411.1784, 2014.

Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. CoRR, abs/1603.09246, 2016.

Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context encoders: Feature learning by inpainting. In Computer Vision and Pattern Recognition, 2016.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In The International Conference on Learning Representations, 2016.

Marc'Aurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv 1412.6604, 2014.

Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder network. In Advances in Neural Information Processing Systems 28, 2015.

Ruslan Salakhutdinov. Learning deep generative models. Annual Review of Statistics and Its Application, 2:361-385, 2015.

Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems 29, 2016.

Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. CoRR, abs/1503.03585, 2015.

Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv 1511.06390, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, 2014.

Kevin Swersky, Jasper Snoek, and Ryan P. Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems 26, 2013.

Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28, 2015.

Xiaojin Zhu. Semi-supervised learning literature survey, 2006."}]
Hy6b4Pqee | [{"section_index": "0", "section_name": "DEEP PROBABILISTIC PROGRAMMING", "section_text": "Dustin Tran
Columbia University

Matthew D. Hoffman
Adobe Research

Rif A. Saurous
Google Research

Eugene Brevdo
Google Brain

Kevin Murphy
Google Research

David M. Blei
Columbia University

ABSTRACT

We propose Edward, a Turing-complete probabilistic programming language. Edward defines two compositional representations: random variables and inference. By treating inference as a first class citizen, on a par with modeling, we show that probabilistic programming can be as flexible and computationally efficient as traditional deep learning. For flexibility, Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to MCMC. In addition, Edward can reuse the modeling representation as part of inference, facilitating the design of rich variational models and generative adversarial networks. For efficiency, Edward is integrated into TensorFlow, providing significant speedups over existing probabilistic systems. For example, we show on a benchmark logistic regression task that Edward is at least 35x faster than Stan and 6x faster than PyMC3. Further, Edward incurs no runtime overhead: it is as fast as handwritten TensorFlow."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The nature of deep neural networks is compositional. Users can connect layers in creative ways, without having to worry about how to perform testing (forward propagation) or inference (gradient-based optimization, with back propagation and automatic differentiation).

In this paper, we design compositional representations for probabilistic programming. Probabilistic programming lets users specify generative probabilistic models as programs and then "compile" those models down into inference procedures. Probabilistic models are also compositional in nature, and much work has enabled rich probabilistic programs via compositions of random variables (Goodman et al., 2012; Ghahramani, 2015; Lake et al., 2016).

Less work, however, has considered an analogous compositionality for inference. Rather, many existing probabilistic programming languages treat the inference engine as a black box, abstracted away from the model. These cannot capture probabilistic inferences that reuse the model's representation, a key idea in recent advances in variational inference (Kingma & Welling, 2014; Rezende & Mohamed, 2015; Tran et al., 2016b), generative adversarial networks (Goodfellow et al., 2014), and also in more classic inferences (Dayan et al., 1995; Gutmann & Hyvärinen, 2010).

We propose Edward¹, a Turing-complete probabilistic programming language which builds on two compositional representations: one for random variables and one for inference. By treating inference as a first class citizen, on a par with modeling, we show that probabilistic programming can be as flexible and computationally efficient as traditional deep learning. For flexibility, we show how Edward makes it easy to fit the same model using a variety of composable inference methods, ranging from point estimation to variational inference to MCMC. For efficiency, we show how to integrate Edward into existing computational graph frameworks such as TensorFlow (Abadi et al., 2016).
Frameworks like TensorFlow provide computational benefits like distributed training, parallelism, vectorization, and GPU support "for free." For example, we show on a benchmark task that Edward's Hamiltonian Monte Carlo is many times faster than existing software. Further, Edward incurs no runtime overhead: it is as fast as handwritten TensorFlow.

¹See Tran et al. (2016a) for details of the API. A companion webpage for this paper is available at http://edwardlib.org/iclr2017. It contains more complete examples with runnable code.
"}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "Probabilistic programming languages (PPLs) typically trade off the expressiveness of the language with the computational efficiency of inference. On one side, there are languages which emphasize expressiveness (Pfeffer, 2001; Milch et al., 2005; Pfeffer, 2009; Goodman et al., 2012), representing a rich class beyond graphical models. Each employs a generic inference engine, but scales poorly with respect to model and data size. On the other side, there are languages which emphasize efficiency (Spiegelhalter et al., 1995; Murphy, 2001; Plummer, 2003; Salvatier et al., 2015; Carpenter et al., 2016). The PPL is restricted to a specific class of models, and inference algorithms are optimized to be efficient for this class. For example, Infer.NET enables fast message passing for graphical models (Minka et al., 2014), and Augur enables data parallelism with GPUs for Gibbs sampling in Bayesian networks (Tristan et al., 2014). Edward bridges this gap. It is Turing complete (it supports any computable probability distribution) and it supports efficient algorithms, such as those that leverage model structure and those that scale to massive data.

There has been some prior research on efficient algorithms in Turing-complete languages. Venture and Anglican design inference as a collection of local inference problems, defined over program fragments (Mansinghka et al., 2014; Wood et al., 2014). This produces fast program-specific inference code, which we build on. Neither system supports inference methods such as programmable posterior approximations, inference models, or data subsampling. Concurrent with our work, WebPPL features amortized inference (Ritchie et al., 2016). Unlike Edward, WebPPL does not reuse the model's representation; rather, it annotates the original program and leverages helper functions, which is a less flexible strategy. Finally, inference is designed as program transformations in Kiselyov & Shan (2009); Scibior et al. (2015); Zinkov & Shan (2016). This enables the flexibility of composing inference inside other probabilistic programs. Edward builds on this idea to compose not only inference within modeling but also modeling within inference (e.g., variational models)."}, {"section_index": "3", "section_name": "3 COMPOSITIONAL REPRESENTATIONS FOR PROBABILISTIC MODELS", "section_text": "We first develop compositional representations for probabilistic models. We desire two criteria: (a) integration with computational graphs, an efficient framework where nodes represent operations on data and edges represent data communicated between them (Culler, 1986); and (b) invariance of the representation under the graph, that is, the representation can be reused during inference.

Edward defines random variables as the key compositional representation.
They are class objects with methods, for example, to compute the log density and to sample. Further, each random variable x is associated to a tensor (multi-dimensional array) x*, which represents a single sample x* ~ p(x). This association embeds the random variable onto a computational graph on tensors.

As an illustration, we use a Beta-Bernoulli model, p(x, θ) = Beta(θ | 1, 1) ∏_{n=1}^{50} Bernoulli(x_n | θ), where θ is a latent probability shared across the 50 data points x ∈ {0, 1}^50. The random variable x is 50-dimensional, parameterized by the random tensor θ*. Fetching the object x runs the graph: it simulates from the generative process and outputs a binary vector of 50 elements.

theta = Beta(a=1.0, b=1.0)
x = Bernoulli(p=tf.ones(50) * theta)
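As a concrete check of this interface, the sketch below fetches one simulation; it assumes, per the description above, that the random variable's associated sample tensor can be fetched in a TensorFlow session.

import tensorflow as tf
from edward.models import Bernoulli, Beta

theta = Beta(a=1.0, b=1.0)
x = Bernoulli(p=tf.ones(50) * theta)

with tf.Session() as sess:
    # One draw of theta* ~ Beta(1, 1) is shared across all 50 Bernoulli bits.
    print(sess.run(x))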
All computation is registered symbolically on random variables and not over their execution. Symbolic representations do not require reifying the full model, which leads to unreasonable memory consumption for large models (Tristan et al., 2014). Moreover, it enables us to simplify both deterministic and stochastic operations in the graph, before executing any code (Scibior et al., 2015; Zinkov & Shan, 2016).

The design's simplicity makes it easy to develop probabilistic programs in a computational graph framework. Importantly, all computation is represented on the graph. This enables one to compose random variables with complex deterministic structure such as deep neural networks, a diverse set of math operations, and third party libraries that build on the same framework. The design also enables compositions of random variables to capture complex stochastic structure.

With computational graphs, it is also natural to build mutable states within the probabilistic program. As a typical use of computational graphs, such states can define model parameters; in TensorFlow, this is given by a tf.Variable. Another use case is for building discriminative models p(y | x), where x are features that are input as training or test data. The program can be written independent of the data, using a mutable state (tf.placeholder) for x in its graph. During training and testing, we feed the placeholder the appropriate values.

In Appendix A, we provide examples of a Bayesian neural network for classification (A.1), latent Dirichlet allocation (A.2), and Gaussian matrix factorization (A.3). We present others below.

3.1 EXAMPLE: VARIATIONAL AUTO-ENCODER

# Probabilistic model
z = Normal(mu=tf.zeros([N, d]), sigma=tf.ones([N, d]))
h = Dense(256, activation='relu')(z)
x = Bernoulli(logits=Dense(28 * 28, activation=None)(h))

# Variational model
qx = tf.placeholder(tf.float32, [N, 28 * 28])
qh = Dense(256, activation='relu')(qx)
qz = Normal(mu=Dense(d, activation=None)(qh),
            sigma=Dense(d, activation='softplus')(qh))

Figure 2: Variational auto-encoder for a data set of 28×28 pixel images: (left) graphical model, with dotted lines for the inference model; (right) probabilistic program, with 2-layer neural networks.

Figure 2 implements a variational auto-encoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) in Edward. It comprises a probabilistic model over data and a variational model designed to approximate the former's posterior. Here we use random variables to construct both the probabilistic model and the variational model; they are fit during inference (more details in Section 4).

There are N data points x_n ∈ {0, 1}^{28·28}, each with d latent variables z_n ∈ R^d. The program uses Keras (Chollet, 2015) to define neural networks. The probabilistic model is parameterized by a 2-layer neural network, with 256 hidden units (and ReLU activation), and generates 28×28 pixel images. The variational model is parameterized by a 2-layer inference network, with 256 hidden units, and outputs parameters of a normal posterior approximation.

The probabilistic program is concise. Core elements of the VAE, such as its distributional assumptions and neural net architectures, are all extensible. With model compositionality, we can embed it into more complicated models (Gregor et al., 2015; Rezende et al., 2016) and for other learning tasks (Kingma et al., 2014). With inference compositionality (which we discuss in Section 4), we can embed it into more complicated algorithms, such as with expressive variational approximations (Rezende & Mohamed, 2015; Tran et al., 2016b; Kingma et al., 2016) and alternative objectives (Ranganath et al., 2016a; Li & Turner, 2016; Dieng et al., 2016)."}, {"section_index": "4", "section_name": "3.2 EXAMPLE: BAYESIAN RECURRENT NEURAL NETWORK WITH VARIABLE LENGTH", "section_text": "Random variables can also be composed with control flow operations. As an example, Figure 3 implements a Bayesian recurrent neural network (RNN) with variable length. The data is a sequence of inputs {x_1, ..., x_T} and outputs {y_1, ..., y_T} of length T, with x_t ∈ R^D and y_t ∈ R per time step. For t = 1, ..., T, a RNN applies the update

h_t = \tanh(W_h h_{t-1} + W_x x_t + b_h),

where the previous hidden state is h_{t-1} ∈ R^H. We feed each hidden state into the output's likelihood, y_t ~ Normal(W_y h_t + b_y, 1), and we place a standard normal prior over all parameters {W_h ∈ R^{H×H}, W_x ∈ R^{D×H}, W_y ∈ R^{H×1}, b_h ∈ R^H, b_y ∈ R}. Our implementation is dynamic: it differs from a RNN with fixed length, which pads and unrolls the computation.

def rnn_cell(hprev, xt):
  return tf.tanh(ed.dot(hprev, Wh) + ed.dot(xt, Wx) + bh)

Wh = Normal(mu=tf.zeros([H, H]), sigma=tf.ones([H, H]))
Wx = Normal(mu=tf.zeros([D, H]), sigma=tf.ones([D, H]))
Wy = Normal(mu=tf.zeros([H, 1]), sigma=tf.ones([H, 1]))
bh = Normal(mu=tf.zeros(H), sigma=tf.ones(H))
by = Normal(mu=tf.zeros(1), sigma=tf.ones(1))

x = tf.placeholder(tf.float32, [None, D])
h = tf.scan(rnn_cell, x, initializer=tf.zeros(H))
y = Normal(mu=tf.matmul(h, Wy) + by, sigma=1.0)

Figure 3: Bayesian RNN: (left) graphical model; (right) probabilistic program. The program has an unspecified number of time steps; it uses a symbolic for loop (tf.scan).

3.3 STOCHASTIC CONTROL FLOW AND MODEL PARALLELISM

Figure 4: Computational graph for a probabilistic program with stochastic control flow.

Random variables can also be placed in the control flow itself, enabling probabilistic programs with stochastic control flow. Stochastic control flow defines dynamic conditional dependencies, known in the literature as contingent or existential dependencies (Mansinghka et al., 2014; Wu et al., 2016). See Figure 4, where x may or may not depend on a for a given execution. In Appendix A.4, we use stochastic control flow to implement a Dirichlet process mixture model. Tensors with stochastic shape are also possible: for example, tf.zeros(Poisson(lam=5.0)) defines a vector of zeros with length given by a Poisson draw with rate 5.0.
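A minimal sketch of such a program, patterned after the stick-breaking construction in Appendix A.4 (the geometric-style counter here is our illustration, not from the paper): it keeps flipping a coin until it lands 0, so the number of loop iterations itself is random.

import tensorflow as tf
from edward.models import Bernoulli

def cond(k, flip):
    return tf.equal(flip, tf.constant(1))

def body(k, flip):
    return k + 1, Bernoulli(p=0.4).value()

k = tf.constant(0)
flip = Bernoulli(p=0.4).value()
# num_flips has no fixed value at graph-construction time; each execution
# traces a different number of loop iterations.
num_flips, _ = tf.while_loop(cond, body, loop_vars=[k, flip])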
Stochastic control flow produces difficulties for algorithms that use the graph structure, because the relationship of conditional dependencies changes across execution traces. The computational graph, however, provides an elegant way of teasing out static conditional dependence structure (p) from dynamic dependence structure (a). We can perform model parallelism (parallel computation across components of the model) over the static structure with GPUs and batch training. We can use more generic computations to handle the dynamic structure."}, {"section_index": "5", "section_name": "4 COMPOSITIONAL REPRESENTATIONS FOR INFERENCE", "section_text": "We described random variables as a representation for building rich probabilistic programs over computational graphs. We now describe a compositional representation for inference. We desire two criteria: (a) support for many classes of inference, where the form of the inferred posterior depends on the algorithm; and (b) invariance of inference under the computational graph, that is, the posterior can be further composed as part of another model.

To explain our approach, we will use a simple hierarchical model as a running example. Figure 5 displays a joint distribution p(x, z, β) of data x, local variables z, and global variables β. The ideas here extend to more expressive programs.

N = 10000  # number of data points
D = 2  # data dimension
K = 5  # number of clusters

beta = Normal(mu=tf.zeros([K, D]), sigma=tf.ones([K, D]))
z = Categorical(logits=tf.zeros([N, K]))
x = Normal(mu=tf.gather(beta, z), sigma=tf.ones([N, D]))

Figure 5: Hierarchical model: (left) graphical model; (right) probabilistic program. It is a mixture of Gaussians over D-dimensional data {x_n} ∈ R^{N×D}. There are K latent cluster means β ∈ R^{K×D}.

The goal of inference is to calculate the posterior distribution p(z, β | x_train; θ) given data x_train, where θ are any model parameters that we will compute point estimates for.² We formalize this as

\min_{\lambda,\theta} \mathcal{L}\big(p(z, \beta \mid x_{\text{train}}; \theta),\; q(z, \beta; \lambda)\big),   (1)

where q(z, β; λ) is an approximation to the posterior p(z, β | x_train; θ), and L is a loss function with respect to p and q.

²For example, we could replace x's sigma argument with tf.exp(tf.Variable(0.0)) * tf.ones([N, D]). This defines a model parameter initialized at 0 and positive-constrained.

The choice of approximation q, loss L, and rules to update parameters {θ, λ} are specified by an inference algorithm. (Note q can be nonparametric, such as a point or a collection of samples.)

The idea is that Inference defines and solves the optimization in Equation 1; an inference is instantiated by binding latent variables to posterior approximations and observed variables to data, e.g., ed.Inference({beta: qbeta, z: qz}, data={x: x_train}). It adjusts parameters of the distribution of qbeta and qz (and any model parameters) to be close to the posterior.

Class methods are available to finely control the inference. Calling inference.initialize() builds a computational graph to update {θ, λ}. Calling inference.update() runs this computation once to update {θ, λ}; we call the method in a loop until convergence. Importantly, no efficiency is lost in Edward's language: the computational graph is the same as if it were handwritten for a specific model. This means the runtime is the same; also see our experiments in Section 5.2.
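Put together, a training driver is a short loop. A minimal sketch, assuming beta, z, x, qbeta, qz, and x_train are defined as in Figures 5 and 6, with ed.KLqp standing in for a concrete subclass:

inference = ed.KLqp({beta: qbeta, z: qz}, data={x: x_train})
inference.initialize(n_iter=1000)  # builds the graph of update operations

tf.initialize_all_variables().run()
for _ in range(inference.n_iter):
    info_dict = inference.update()  # one stochastic update of {theta, lambda}
    inference.print_progress(info_dict)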
A key concept in Edward is that there is no distinct "model" or "inference" block. A model is simply a collection of random variables, and inference is a way of modifying parameters in that collection subject to another. This reductionism offers significant flexibility. For example, we can infer only parts of a model (e.g., layer-wise training (Hinton et al., 2006)), infer parts used in multiple models (e.g., multi-task learning), or plug in a posterior into a new model (e.g., Bayesian updating)."}, {"section_index": "6", "section_name": "4.2 CLASSES OF INFERENCE", "section_text": "The design of Inference is very general. We describe subclasses to represent many algorithms below: variational inference, Monte Carlo, and generative adversarial networks.

Variational inference posits a family of approximating distributions and finds the closest member in the family to the posterior (Jordan et al., 1999). In Edward, we build the variational family in the graph; see Figure 6 (left). For our running example, the family has mutable variables as parameters λ = {π, μ, σ}, where q(β; μ, σ) = Normal(β; μ, σ) and q(z; π) = Categorical(z; π).

qbeta = Normal(
    mu=tf.Variable(tf.zeros([K, D])),
    sigma=tf.exp(tf.Variable(tf.zeros([K, D]))))
qz = Categorical(
    logits=tf.Variable(tf.zeros([N, K])))

inference = ed.VariationalInference(
    {beta: qbeta, z: qz}, data={x: x_train})

T = 10000  # number of samples
qbeta = Empirical(
    params=tf.Variable(tf.zeros([T, K, D])))
qz = Empirical(
    params=tf.Variable(tf.zeros([T, N])))

inference = ed.MonteCarlo(
    {beta: qbeta, z: qz}, data={x: x_train})

Figure 6: (left) Variational inference. (right) Monte Carlo.

Specific variational algorithms inherit from the VariationalInference class. Each defines its own methods, such as a loss function and gradient. For example, we represent maximum a posteriori (MAP) estimation with an approximating family (qbeta and qz) of PointMass random variables, i.e., with all probability mass concentrated at a point. MAP inherits from VariationalInference and defines the negative log joint density as the loss function; it uses existing optimizers inside TensorFlow. In Section 5.1, we experiment with multiple gradient estimators for black box variational inference (Ranganath et al., 2014). Each estimator implements the same loss (an objective proportional to the divergence KL(q ‖ p)) and a different update rule (stochastic gradient).
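For instance, MAP for the running example takes only a few lines. A sketch, assuming beta, x, x_train, and a current local estimate qz (as in Section 4.3) are in scope:

from edward.models import PointMass

# All posterior mass for beta sits at a single trainable point.
qbeta = PointMass(params=tf.Variable(tf.zeros([K, D])))

# MAP minimizes the negative log joint with TensorFlow's optimizers;
# z is conditioned on the current estimate qz.
inference = ed.MAP({beta: qbeta}, data={x: x_train, z: qz})
inference.run(n_iter=500)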
Monte Carlo approximates the posterior using samples (Robert & Casella, 1999): it is an inference where the approximating family is an empirical distribution, q(β; {β^(t)}) = (1/T) Σ_{t=1}^T δ(β, β^(t)) and q(z; {z^(t)}) = (1/T) Σ_{t=1}^T δ(z, z^(t)). The parameters are λ = {β^(t), z^(t)}. See Figure 6 (right). Monte Carlo algorithms proceed by updating one sample β^(t), z^(t) at a time in the empirical approximation. Specific MC samplers determine the update rules: they can use gradients, such as in Hamiltonian Monte Carlo (Neal, 2011), and graph structure, such as in sequential Monte Carlo (Doucet et al., 2001).

Edward also supports non-Bayesian methods such as generative adversarial networks (GANs) (Goodfellow et al., 2014). See Figure 7. The model posits random noise eps over N data points, each with d dimensions; this random noise feeds into a generative_network function, a neural network that outputs real-valued data x. In addition, there is a discriminative_network which takes data as input and outputs the probability that the data is real (in logit parameterization). We build GANInference; running it optimizes parameters inside the two neural network functions. This approach extends to many advances in GANs (e.g., Denton et al. (2015); Li et al. (2015)).

def generative_network(eps):
  h = Dense(256, activation='relu')(eps)
  return Dense(28 * 28, activation=None)(h)

def discriminative_network(x):
  h = Dense(28 * 28, activation='relu')(x)
  return Dense(1, activation=None)(h)

# Probabilistic model
eps = Normal(mu=tf.zeros([N, d]), sigma=tf.ones([N, d]))
x = generative_network(eps)

inference = ed.GANInference(data={x: x_train},
    discriminator=discriminative_network)

Figure 7: Generative adversarial networks: (left) graphical model; (right) probabilistic program. The model (generator) uses a parameterized function (discriminator) for training.

Finally, one can design algorithms that would otherwise require tedious algebraic manipulation. With symbolic algebra on nodes of the computational graph, we can uncover conjugacy relationships between random variables. Users can then integrate out variables to automatically derive classical Gibbs (Gelfand & Smith, 1990), mean-field updates (Bishop, 2006), and exact inference. These algorithms are currently being developed in Edward."}, {"section_index": "7", "section_name": "4.3 COMPOSING INFERENCES", "section_text": "Core to Edward's design is that inference can be written as a collection of separate inference programs. Below we demonstrate variational EM, with an (approximate) E-step over local variables and an M-step over global variables. We instantiate two algorithms, each of which conditions on inferences from the other, and we alternate with one update of each (Neal & Hinton, 1993):

qbeta = PointMass(params=tf.Variable(tf.zeros([K, D])))
qz = Categorical(logits=tf.Variable(tf.zeros([N, K])))

inference_e = ed.VariationalInference({z: qz}, data={x: x_train, beta: qbeta})
inference_m = ed.MAP({beta: qbeta}, data={x: x_train, z: qz})

This extends to many other cases such as exact EM for exponential families, contrastive divergence (Hinton, 2002), pseudo-marginal methods (Andrieu & Roberts, 2009), and Gibbs sampling within variational inference (Wang & Blei, 2012; Hoffman & Blei, 2015).
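The alternation itself is then a short loop over the two update methods. A sketch, assuming the objects defined above and a default session as in Appendix B:

inference_e.initialize()
inference_m.initialize()
tf.initialize_all_variables().run()

for _ in range(1000):
    inference_e.update()  # E-step: refine q(z) under the current point estimate
    inference_m.update()  # M-step: move the point mass over beta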
We can also write message passing algorithms, which solve a collection of local inference problems (Koller & Friedman, 2009). For example, classical message passing uses exact local inference, and expectation propagation locally minimizes the Kullback-Leibler divergence, KL(p ‖ q) (Minka, 2001)."}, {"section_index": "8", "section_name": "4.4 DATA SUBSAMPLING", "section_text": "Stochastic optimization (Bottou, 2010) scales inference to massive data and is key to algorithms such as stochastic gradient Langevin dynamics (Welling & Teh, 2011) and stochastic variational inference (Hoffman et al., 2013). The idea is to cheaply estimate the model's log joint density in an unbiased way. At each step, one subsamples a data set {x_m} of size M and then scales densities with respect to local variables,

\log p(x, z, \beta) = \log p(\beta) + \sum_{n=1}^{N} \big[\log p(x_n \mid z_n, \beta) + \log p(z_n \mid \beta)\big]
\approx \log p(\beta) + \frac{N}{M} \sum_{m=1}^{M} \big[\log p(x_m \mid z_m, \beta) + \log p(z_m \mid \beta)\big].

To support stochastic optimization, we represent only a subgraph of the full model. This prevents reifying the full model, which can lead to unreasonable memory consumption (Tristan et al., 2014). During initialization, we pass in a dictionary to properly scale the arguments. See Figure 8.

beta = Normal(mu=tf.zeros([K, D]), sigma=tf.ones([K, D]))
z = Categorical(logits=tf.zeros([M, K]))
x = Normal(mu=tf.gather(beta, z), sigma=tf.ones([M, D]))

qbeta = Normal(mu=tf.Variable(tf.zeros([K, D])),
               sigma=tf.nn.softplus(tf.Variable(tf.zeros([K, D]))))
qz = Categorical(logits=tf.Variable(tf.zeros([M, K])))

inference = ed.VariationalInference({beta: qbeta, z: qz}, data={x: x_batch})
inference.initialize(scale={x: float(N)/M, z: float(N)/M})

Figure 8: Data subsampling with a hierarchical model. We define a subgraph of the full model, forming a plate of size M rather than N. We then scale all local random variables by N/M.

Conceptually, the scale argument represents scaling for each random variable's plate, as if we had seen that random variable N/M as many times. As an example, Appendix B shows how to implement stochastic variational inference in Edward. The approach extends naturally to streaming data (Doucet et al., 2000; Broderick et al., 2013; McInerney et al., 2015), dynamic batch sizes, and data structures in which working on a subgraph does not immediately apply (Binder et al., 1997; Johnson & Willsky, 2014; Foti et al., 2014).
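A driver for Figure 8 then feeds a fresh minibatch at every step. A sketch assuming the objects from Figure 8, with ed.KLqp standing in for the abstract class and a hypothetical next_batch helper:

x_ph = tf.placeholder(tf.float32, [M, D])
inference = ed.KLqp({beta: qbeta, z: qz}, data={x: x_ph})
inference.initialize(scale={x: float(N) / M, z: float(N) / M})

tf.initialize_all_variables().run()
for _ in range(1000):
    x_batch = next_batch(size=M)  # hypothetical minibatch sampler
    inference.update(feed_dict={x_ph: x_batch})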
We study differen components of the vAE setup using different methods; Appendix C.1 is a complete script. Afte training we evaluate held-out log likelihoods, which are lower bounds on the true value.\nTable 1 shows the results. The first method uses the vAE from Figure 2. The next three methods use the same vAE but apply different gradient estimators: reparameterization gradient without ar analytic KL; reparameterization gradient with an analytic entropy; and the score function gradien (Paisley et al., 2012; Ranganath et al., 2014). This typically leads to the same optima but at differen convergence rates. The score function gradient was slowest. Gradients with an analytic entropy produced difficulties around convergence: we switched to stochastic estimates of the entropy as it approached an optima. We also use hierarchical variational models (Hvms) (Ranganath et al. 2016b) with a normalizing flow prior; it produced similar results as a normalizing flow on the laten variable space (Rezende & Mohamed, 2015), and better than importance-weighted auto-encoders (1wAEs) (Burda et al., 2016).\nWe also study novel combinations, such as HvMs with the IwAE objective, GAN-based optimizatior. on the decoder (with pixel intensity-valued data), and Renyi divergence on the decoder. GAN-base optimization does not enable calculation of the log-likelihood; Renyi divergence does not directly. optimize for log-likelihood so it does not perform well. The key point is that Edward is a convenien. research platform: they are all easy modifications of a given script..\n5.2 GPU-ACCELERATED HAMILTONIAN MONTE CARLO\n1 # Mode1 2 x = tf.Variable(x data, trainable=False) 3 beta = Normal (mu=tf.zeros(D), sigma=tf.ones(D) ) 4 y = Bernoulli(logits=tf.dot(x, beta)) 5 6 # Inference Xn 7 qbeta = Empirical(params=tf.Variable(tf.zeros([T, D]))) N 8 inference = ed.HMc({beta: qbeta}, data={y: y_data}) 9 inference.run(step_size-0.5 / N, n_steps=10)"}, {"section_index": "9", "section_name": "Figure 9: Edward program for Bayesian logistic regression with Hamiltonian Monte Carlo (Hmc", "section_text": "We benchmark runtimes for a fixed number of Hamiltonian Monte Carlo (Hmc; Neal, 2011) iter-. ations on modern hardware: a 12-core Intel i7-5930K CPU at 3.50GHz and an NVIDIA Titan X (Maxwell) GPU. We apply logistic regression on the Covertype dataset (N = 581012, D = 54; responses were binarized) using Edward, Stan (with PyStan) (Carpenter et al., 2016), and PyMC3 (Salvatier et al., 2015). We ran 100 Hmc iterations, with 10 leapfrog updates per iteration, a step. size of 0.5/N, and single precision. Figure 9 illustrates the program in Edward..\nTable 2 displays the runtimes.3 Edward (GPU) features a dramatic 35x speedup over Stan (1 CPU) and 6x speedup over PyMC3 (12 CPU). This showcases the value of building a PPL on top of com\n3In a previous version of this paper, we reported PyMC3 took 361s. This was caused by a bug preventing PyMC3 from correctly handling single-precision floating point. (PyMC3 with double precision is roughly 14x\nInference method Negative log-likelihood. VAE (Kingma & Welling, 2014) < 88.2 VAE without analytic KL. < 89.4 VAE with analytic entropy. < 88.1 VAE with score function gradient. 87.9 Normalizing flows (Rezende & Mohamed, 2015). 85.8 Hierarchical variational model (Ranganath et al., 2016b). 
Table 2: HMC benchmark for large-scale logistic regression. Edward (GPU) is significantly faster than other systems. In addition, Edward has no overhead: it is as fast as handwritten TensorFlow.

The speedup stems from fast matrix multiplication when calculating the model's log-likelihood; GPUs can efficiently parallelize this computation. We expect similar speedups for models whose bottleneck is also matrix multiplication, such as deep neural networks.

There are various reasons for the speedup. Stan only used 1 CPU as it leverages multiple cores by running HMC chains in parallel. Stan also used double-precision floating point as it does not allow single-precision. For PyMC3, we note Edward's speedup is not a result of PyMC3's Theano backend compared to Edward's TensorFlow. Rather, PyMC3 does not use Theano for all its computation, so it experiences communication overhead with NumPy. (PyMC3 was actually slower when using the GPU.) We predict that porting Edward's design to Theano would feature similar speedups.

In addition to these speedups, we highlight that Edward has no runtime overhead: it is as fast as handwritten TensorFlow.
Following Section 4.1, this is because the computational graphs for inference are in fact the same for Edward and the handwritten code."}, {"section_index": "9", "section_name": "5.3 PROBABILITY ZOO", "section_text": "In addition to Edward, we also release the Probability Zoo, a community repository for pre-trained probability models and their posteriors.⁴ It is inspired by the model zoo in Caffe (Jia et al., 2014), which provides many pre-trained discriminative neural networks, and which has been key to making large-scale deep learning more transparent and accessible. It is also inspired by Forest (Stuhlmüller, 2012), which provides examples of probabilistic programs.

⁴The Probability Zoo is available at http://edwardlib.org/zoo. It includes model parameters and inferred posterior factors, such as local and global variables during training and any inference networks.

6 DISCUSSION

We described Edward, a Turing-complete PPL with compositional representations for probabilistic models and inference. Edward expands the scope of probabilistic programming to be as flexible and computationally efficient as traditional deep learning. For flexibility, we showed how Edward can use a variety of composable inference methods, capture recent advances in variational inference and generative adversarial networks, and finely control the inference algorithms. For efficiency, we showed how Edward leverages computational graphs to achieve fast, parallelizable computation, scales to massive data, and incurs no runtime overhead over handwritten code.

In present work, we are applying Edward as a research platform for developing new probabilistic models (Rudolph et al., 2016; Tran et al., 2017) and new inference algorithms (Dieng et al., 2016). As with any language design, Edward makes tradeoffs in pursuit of its flexibility and speed for research. For example, an open challenge in Edward is to better facilitate programs with complex control flow and recursion. While possible to represent, it is unknown how to enable their flexible inference strategies. In addition, it is open how to expand Edward's design to dynamic computational graph frameworks, which provide more flexibility in their programming paradigm but may sacrifice performance. A crucial next step for probabilistic programming is to leverage dynamic computational graphs while maintaining the flexibility and efficiency that Edward offers."}, {"section_index": "10", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We thank the probabilistic programming community, for sharing our enthusiasm and motivating further work, including developers of Church, Venture, Gamalon, Hakaru, and WebPPL. We also thank Stan developers for providing extensive feedback as we developed the language, as well as Thomas Wiecki for experimental details. We thank the Google BayesFlow team (Joshua Dillon, Ian Langmore, Ryan Sepassi, and Srinivas Vasudevan) as well as Amr Ahmed, Matthew Johnson, Hung Bui, Rajesh Ranganath, Maja Rudolph, and Francisco Ruiz for their helpful feedback. This work is supported by NSF IIS-1247664, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, DARPA N66001-15-C-4032, Adobe, Google, NSERC PGS-D, and the Sloan Foundation."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zhang. TensorFlow: A system for large-scale machine learning. arXiv preprint arXiv:1605.08695, 2016.

John Binder, Kevin Murphy, and Stuart Russell. Space-efficient inference in dynamic probabilistic networks. In International Joint Conference on Artificial Intelligence, 1997.

Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.

Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pp. 177-186. Springer, 2010.

Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C Wilson, and Michael I Jordan. Streaming variational Bayes. In Neural Information Processing Systems, 2013.

Yuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In International Conference on Learning Representations, 2016.

François Chollet. Keras. https://github.com/fchollet/keras, 2015.

David E Culler. Dataflow architectures. Technical report, DTIC Document, 1986.

Peter Dayan, Geoffrey E Hinton, Radford M Neal, and Richard S Zemel. The Helmholtz machine. Neural Computation, 7(5):889-904, 1995.

Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Neural Information Processing Systems, 2015.

Arnaud Doucet, Simon Godsill, and Christophe Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10(3):197-208, 2000.

Alan E Gelfand and Adrian FM Smith. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85(410):398-409, 1990.

Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303-1347, 2013.

M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Neural Information Processing Systems, 2014.

Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. In Neural Information Processing Systems, 2016.

Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289, 2016.

Thomas P Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence, 2001.

Radford M. Neal and Geoffrey E. Hinton. A new view of the EM algorithm that justifies incremental and other variants. In Learning in Graphical Models, pp. 355-368. Kluwer Academic Publishers, 1993.

John Paisley, David M. Blei, and Michael Jordan. Variational Bayesian inference with stochastic search. In International Conference on Machine Learning, 2012.

Rajesh Ranganath, Dustin Tran, and David M Blei. Hierarchical variational models. In International Conference on Machine Learning, 2016b.

Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, 2014.

Christian P Robert and George Casella. Monte Carlo Statistical Methods. Springer, 1999.

Adam Scibior, Zoubin Ghahramani, and Andrew D Gordon. Practical probabilistic programming with monads. In the 8th ACM SIGPLAN Symposium, pp. 165-176, New York, New York, USA, 2015. ACM Press.

Dustin Tran, Rajesh Ranganath, and David M. Blei. The variational Gaussian process. In International Conference on Learning Representations, 2016b.

Frank Wood, Jan Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. In Artificial Intelligence and Statistics, 2014.

Robert Zinkov and Chung-chieh Shan. Composing inference algorithms as program transformations. arXiv preprint arXiv:1603.01882, 2016."}, {"section_index": "12", "section_name": "A MODEL EXAMPLES", "section_text": "There are many examples available at http://edwardlib.org, including models, inference methods, and complete scripts. Below we describe several model examples; Appendix B describes an inference example (stochastic variational inference); Appendix C describes complete scripts. All examples in this paper are comprehensive, only leaving out import statements and fixed values. See the companion webpage for this paper (http://edwardlib.org/iclr2017) for examples in a machine-readable format with runnable code.

A.1 BAYESIAN NEURAL NETWORK FOR CLASSIFICATION

A Bayesian neural network is a neural network with a prior distribution on its weights.

Define the likelihood of an observation (x_n, y_n) with binary label y_n ∈ {0, 1} as

p(y_n \mid W_0, b_0, W_1, b_1;\, x_n) = \text{Bernoulli}\big(y_n \mid \text{NN}(x_n;\, W_0, b_0, W_1, b_1)\big),

where NN is a 2-layer neural network whose weights and biases form the latent variables W_0, b_0, W_1, b_1. Define the prior on the weights and biases to be the standard normal. See Figure 10.

W_0 = Normal(mu=tf.zeros([D, H]), sigma=tf.ones([D, H]))
W_1 = Normal(mu=tf.zeros([H, 1]), sigma=tf.ones([H, 1]))
b_0 = Normal(mu=tf.zeros(H), sigma=tf.ones(H))
b_1 = Normal(mu=tf.zeros(1), sigma=tf.ones(1))

x = tf.placeholder(tf.float32, [N, D])
y = Bernoulli(logits=tf.matmul(tf.nn.tanh(tf.matmul(x, W_0) + b_0), W_1) + b_1)

Figure 10: Bayesian neural network for classification.
There are N data points, D features, and H hidden units."}, {"section_index": "13", "section_name": "A.2 LATENT DIRICHLET ALLOCATION", "section_text": "See Figure 11. Note that the program is written for illustration. We recommend vectorization in practice: instead of storing scalar random variables in lists of lists, one should prefer to represent a few random variables, each of which has many dimensions.

D = 4  # number of documents
N = [11502, 213, 1523, 1351]  # words per doc
K = 10  # number of topics
V = 100000  # vocabulary size

theta = Dirichlet(alpha=tf.zeros([D, K]) + 0.1)
phi = Dirichlet(alpha=tf.zeros([K, V]) + 0.05)
z = [[0] * N] * D
w = [[0] * N] * D
for d in range(D):
  for n in range(N[d]):
    z[d][n] = Categorical(pi=theta[d, :])
    w[d][n] = Categorical(pi=phi[z[d][n], :])

Figure 11: Latent Dirichlet allocation (Blei et al., 2003).

A.3 GAUSSIAN MATRIX FACTORIZATION

N = 10
M = 10
K = 5  # latent dimension

U = Normal(mu=tf.zeros([M, K]), sigma=tf.ones([M, K]))
V = Normal(mu=tf.zeros([N, K]), sigma=tf.ones([N, K]))
Y = Normal(mu=tf.matmul(U, V, transpose_b=True), sigma=tf.ones([N, M]))

Figure 12: Gaussian matrix factorization."}, {"section_index": "14", "section_name": "A.4 DIRICHLET PROCESS MIXTURE MODEL", "section_text": "See Figure 13.

A Dirichlet process mixture model is written as follows:

mu = DirichletProcess(alpha=0.1, base_cls=Normal,
                      mu=tf.zeros(D), sigma=tf.ones(D), sample_n=N)
x = Normal(mu=mu, sigma=tf.ones([N, D]))

where mu has shape (N, D). The DirichletProcess random variable returns sample_n=N draws, each with shape given by the base distribution Normal(mu, sigma). The essential component defining the DirichletProcess random variable is a stochastic while loop. We define it below. See Edward's code base for a more involved version with a base distribution.

def dirichlet_process(alpha):
  def cond(k, beta_k):
    flip = Bernoulli(p=beta_k)
    return tf.equal(flip, tf.constant(1))

  def body(k, beta_k):
    beta_k = beta_k * Beta(a=1.0, b=alpha)
    return k + 1, beta_k

  k = tf.constant(0)
  beta_k = Beta(a=1.0, b=alpha)
  stick_num, stick_beta = tf.while_loop(cond, body, loop_vars=[k, beta_k])
  return stick_num"}, {"section_index": "15", "section_name": "B INFERENCE EXAMPLE: STOCHASTIC VARIATIONAL INFERENCE", "section_text": "In the subgraph setting, we do data subsampling while working with a subgraph of the full model. This setting is necessary when the data and model do not fit in memory. It is scalable in that both the algorithm's computational complexity (per iteration) and memory complexity are independent of the data set size.

For the code, we use the running example, a mixture model described in Figure 5.
a mixture model described in Figure 5\n1 N = 10 2 M = 10 3 K = 5 # latent dimension 4 5 U = Normal(mu=tf.zeros([M, K]), sigma=tf.ones([M, K])) 6 V = Normal(mu=tf.zeros([N, K]), sigma=tf.ones([N, K])) n,m 7 Y = Normal(mu=tf.matmul(U, V, transpose_b=True), sigma=tf.ones([N, M])) 1\nN p(x,z,3) = p(3) 1 p(zn B)p(xnzn, n=1\nTo avoid memory issues, we work on only a subgraph of the model\nM p(x,z,) =p( 1 p(ZmB)p(xmzm, m=1\n1 M = 128 # mini-batch size 2 3 beta = Normal(mu=tf.zeros([K, D]), sigma=tf.ones([K, D])) 4 z = Categorical(logits=tf.zeros([M, K])) 5 x = Normal(mu=tf.gather(beta, z), sigma=tf.ones([M, D]))\nparameterized by ().() gain, we work on only a subgraph of the model.\nqbeta = Normal(mu=tf.Variable(tf.zeros([K, D])), sigma=tf.nn.softplus(tf.Variable(tf.zeros[K, D]))) qz_variables = tf.Variable(tf.zeros([M, K])) qz = Categorical(logits=qz_variables)\nWe use kLqp, a variational method that minimizes the divergence measure KL(q p) (Jordan et al.. 1999). We instantiate two algorithms: a global inference over given the subset of z and a loca inference over the subset of z given . We also pass in a TensorFlow placeholder x_ph for the data so we can change the data at each step.\nx_ph = tf.placeholder(tf.float32, [M]) inference_global = ed.KLqp({beta: qbeta}, data-{x: x_ph, z: qz}) inference_local = ed.KLqp({z: qz}, data={x: x_ph, beta: qbeta})\nWe initialize the algorithms with the scale argument, so that computation on z and x will be scaled appropriately. This enables unbiased estimates for stochastic gradients..\nqz_init = tf.initialize_variables([qz_variables]) for - in range(10oo) : x batch = next batch(size=M) for - in range(10): # make local inferences inference local.update(feed_dict={x_ph: x_batch}) # update global parameters inference global.update(feed dict={x ph: x batch}) # reinitialize the local factors qz_init.run()\nAfter each iteration, we also reinitialize the parameters for q(z | 3); this is because we do inferenc. on a new set of local variational factors for each batch. This demo readily applies to other inference algorithms such as sgLD (stochastic gradient Langevin dynamics): simply replace qbeta and qz witl Empirical random variables; then call ed. sgLD instead of ed. KLqp..\nN qz,) = q(;X 1 q(zn B;Yn) n=1\nM qz,) =q(;X) q(ZmB;Ym) m=1\nparameterized by {, {/m}}. Importantly, only M parameters are stored in memory for {ym} rather than N.\nWe now run the algorithm, assuming there is a next_batch function which provides the next batch of data.\nNote that if the data and model fit in memory but you'd still like to perform data subsampling for fast inference, we recommend not defining subgraphs. You can reify the full model, and simply index the local variables with a placeholder. The placeholder is fed at runtime to determine which of the. local variables to update at a time. (For more details, see the website's API.).\nSee Figure 14.\nFigure 14: Complete script for a VAE (Kingma & Welling, 2014) with batch training. It generates MNIST digits after every 1000 updates"}, {"section_index": "17", "section_name": "C.2 PROBABILISTIC MODEL FOR WORD EMBEDDINGS", "section_text": "See Figure 15. This example uses data subsampling (Section 4.4). The priors and conditiona likelihoods are defined only for a minibatch of data. Similarly the variational model only model the embeddings used in a given minibatch. TensorFlow variables contain the embedding vectors fo the entire vocabulary. 
TensorFlow placeholders ensure that the correct embedding vectors are use as variational parameters for a given minibatch.\nThe Bernoulli variables y_pos and y_neg are fixed to be 1's and O's respectively. They model whether a word is indeed the target word for a given context window or has been drawn as a neg. ative sample. Without regularization (via priors), the objective we optimize is identical to negative sampling.\nimport edward as ed 1 2 import tensorflow as tf 3 4 from edward.models import Bernoulli, Normal, PointMass S 6 N = 581238 # number of total words M = 128 # batch size during training 8 K = 100 # number of factors 9 ns = 3 # number of negative samples cs = 4 # context size 1 L = 50000 # vocabulary size 2 3 # Prior over embedding vectors p_rho = Normal(mu=tf.zeros([M, K]), S sigma=tf.sqrt(N) * tf.ones([M, K])) 6 n_rho = Normal(mu=tf.zeros([M, ns, K]), 7 sigma=tf.sqrt(N) * tf.ones([M, ns, K])) 8 9 # Prior over context vectors 0 ctx_alphas = Normal(mu=tf.zeros([M, cs, K]), 1 sigma=tf.sqrt(N)*tf.ones([M, cs, K])) 2 3 # Conditional likelihoods 4 ctx_sum = tf.reduce_sum(ctx_alphas, [1]) 5 p_eta = tf.expand_dims(tf.reduce_sum(p_rho * ctx_sum, -1),1) 6 n_eta = tf.reduce_sum(n_rho * tf.tile(tf.expand_dims(ctx_sum, 1), [1, ns, 1]), 7 y_pos = Bernoulli(logits=p_eta) 8 y_neg = Bernoulli(logits=n_eta) 9 0 # placeholders for batch training 1 p_idx = tf.placeholder(tf.int32, [M, 1]) 2 n_idx = tf.placeholder(tf.int32, [M, ns]) 3 ctx_idx = tf.placeholder(tf.int32, [M, cs]) 4 5 # Variational parameters (embedding vectors) 6 rho_params = tf.Variable(tf.random_normal([L, K])) 7 alpha_params = tf.Variable(tf.random_normal([L, K])) 8 9 # Variational distribution on embedding vectors 0 q p_rho = PointMass(params-tf.squeeze(tf.gather(rho_params, p_idx))) q n_rho = PointMass(params=tf.gather(rho_params, n_idx)) 2 q_alpha = PointMass(params-tf.gather(alpha_params, ctx_idx)) 3 4 inference = ed.MAP( S {p_rho: q p_rho, n_rho: q n_rho, ctx_alphas: q_alpha}, 9 data={y_pos: tf.ones((M, 1)), y_neg: tf.zeros((M, ns))}) 7 inference.initialize() 9 tf.initialize all_variables().run() 0 1 for _ in range(inference.n_iter) : 2 targets, windows, negatives = next_batch(M) # a function to generate data. 3 info_dict = inference.update(feed_dict={p_idx: targets, ctx_idx: windows, n_i inference.print_progress(info_dict)\nFigure 15: Exponential family embedding for binary data (Rudolph et al., 2016). Here, MAP is used to maximize the total sum of conditional log-likelihoods and log-priors.."}] |
rkFd2P5gl | [{"section_index": "0", "section_name": "LEVERAGING ASYNCHRONICITY IN GRADIENT DESCENT FOR SCALABLE DEEP LEARNING", "section_text": "Jeff Daily, Abhinav Vishnu, Charles Siegel\njeff.daily, abhinav.vishnu, charles.siegel}@pnnl.gov\nIn this paper, we present multiple approaches for improving the performance of gradient descent when utilizing mutiple compute resources. The proposed ap. proaches span a solution space ranging from equivalence to running on a single compute device to delaying gradient updates a fixed number of times. We present. a new approach, asynchronous layer-wise gradient descent that maximizes overlap. of layer-wise backpropagation (computation) with gradient synchronization (com-. munication). This approach provides maximal theoretical equivalence to the de. facto gradient descent algorithm, requires limited asynchronicity across multiple iterations of gradient descent, theoretically improves overall speedup, while mini- mizing the additional space requirements for asynchronicity. We implement all of our proposed approaches using Caffe - a high performance Deep Learning library. - and evaluate it on both an Intel Sandy Bridge cluster connected with Infini- Band as well as an NVIDIA DGX-1 connected with NVLink. The evaluations are. performed on a set of well known workloads including AlexNet and GoogleNet on the ImageNet dataset. Our evaluation of these neural network topologies in. dicates asynchronous gradient descent has a speedup of up to 1.7x compared to. synchronous."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep Learning (DL) algorithms are a class of Machine Learning and Data Mining (MLDM) algo rithms, which use an inter-connection of neurons and synapses to emulate the computational struc ture of a mammalian brain. DL algorithms have demonstrated resounding success in many com puter vision tasks and science domains such as high energy physics, computational chemistry and high performance computing use-cases. Several DL implementations such as TensorFlow, Caffe Theano, and Torch have become available. These implementations are primarily geared towards compute nodes that may contain multi-core architecture (such as Intel Xeon/KNC/KNL) and ol many-core architectures (GPUs).\nDL algorithms are under-going a tremendous revolution of their own. Widely used DL algorithm such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are com putationally expensive. Their computational requirements are further worsened by: 1) Very dee neural networks such as recently proposed 1000-layer complex Residual Networks (ResNet), 2) In creasing volume of data produced by simulations, experiments and handheld devices. An importan solution to these problems is the design and implementation of DL algorithms that are capable o execution on distributed memory large scale cluster/cloud computing systems. A few distributed DI implementations such as CaffeonSpark, Distributed TensorFlow, CNTK, Machine Learning Toolki on Extreme Scale (MaTEx), and FireCaffe have become available. Implementations such as CNTK FireCaffe and MaTEx use MPI (Gropp et al.]1996Geist et al.][1996) - which makes them a natura fit for high-end systems.\nDL algorithms primarily use gradient descent - an iterative technique in which the weights of synapes are updated using the difference between the ground truth (actual value) and the predicted value (using the current state of the neural network). 
The larger the difference, the steeper the de"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "scent to a minima (a low value of minima generates the solution). An important type of gradient descent is batch gradient descent - where a random subset of samples are used for iterative feed forward (calculation of predicted value) and back-propagation (update of synaptic weights). A small batch is prone to severe pertubations to the descent, while a large batch results in slow convergence. Hence, a data scientist tends to use a fairly average batch - which finds the balance between these two conflicting metrics.\nA large scale parallelization of gradient descent must maximize the equivalence to the default algo rithm, such that the convergence property is maintained. Consider a scenario where a batch (b) ir the original algorithm is split across multiple compute nodes (n) - an example of data parallelism To provide equivalence to the default algorithm, the batch must be split equally to , although the communication which would require an all-to-all reduction would increase as O(log n). Naturally as n is increased and b is held constant (strong scaling), this becomes prohibitive, whereas keeping the batch size per node b/n constant (weak scaling) increases the convergence time.\nSeveral researchers have proposed methods to alleviate the communication requirements of dis tributed gradient descent. Parameter-server based approaches use a server to hold the latest versior of the model while clients send computed gradients and request the latest model. This approach ha been proposed and extended by several researchers. While theoretically this provides O(1) time complexity since all batch updates can be computed simultaneously, this approach fails to scale beyond a few compute nodes when considering the time to convergence relative to having run th computation on a single device. Others have proven divergence from the original algorithm. Remot Direct Memory Access (RDMA) based approaches have been proposed, but they also diverge fron the original algorithm. Several other implementations are primarily geared towards shared memor systems, and address the thread contention issue for gradient descent."}, {"section_index": "3", "section_name": "1.1 CONTRIBUTIONS", "section_text": "Specifically, we make the following contributions in this paper:\nThe rest of the paper is organized as follows: In section[2] we present related work of our proposec research. We present the background in section [3] followed by an in-depth solution space in sec tion[4] In section[6] we present a detailed performance evaluation of asynchronous gradient descent. and conclusions with future directions in section\nOur objective is to design a non-parameter-server based technique, which maximizes the equivalence to the default algorithm, while leveraging high performance architectures - including computational units such as GPUs and high performance interconnects such as InfiniBand, Intel Omni-path archi. tectures by using MPI.\nWe design a baseline asynchronous gradient descent, which delays the gradient updates o the entire model by one or more iterations adaptively on the basis of available overlap anc user-defined input. We propose a layer-wise gradient descent method, which overlaps weight updates of a layer with inter-node synchronization of other layers. The proposed method is exactly equiavalent to the default sequential algorithm. 
We implement our approaches and other baseline techniques using the Machine Learning Toolkit for Extreme Scale (MaTEx), which consists of a distributed memory implementa tion of Caffe using MPI (Gropp et al.1996] Geist et al.][1996). We evaluate our approaches and other baseline implementations on a large scale CPU-basec InfiniBand cluster as well as on NVIDIA's DGX-1 multi-GPU system. We use several wel studied datasets and DNN topologies such as ImageNet (1.3M images, 250GB dataset with AlexNet and GoogleNet DNNs.\nOur evaluation indicates the efficacy of the proposed approach. Specifically, the best asynchronous approach is up to 1.7x faster than the synchronous approach while achieving up to 82% paralle efficiency."}, {"section_index": "4", "section_name": "2 RELATED WORK", "section_text": "Batch gradient descent is the most widely used algorithm for training Deep Learning models. This algorithm has been implemented several times for sequential, multi-core and many-core systems such as GPUs. The most widely used implementations are Caffe (Jia et al.|2014) (CPUs/GPUs) Warp-CTC (GPUs), Theano (Bastien et al.]2012) Bergstra et al.]2010) (CPUs/GPUs), Torch (Col lobert et al.]2002) (CPUs/GPUs), CNTK (Agarwal et al.]2014) (GPUs and Distributed Memory using MPI) and Google TensorFlow (Abadi et al.]2015) which use nVIDIA CUDA Deep Neura. Network (cuDNN).\nCaffe is one of the leading software tools for training and deploying deep learning algorithms, and it can be used to develop novel extensions to these algorithms such as the ones described below Caffe supports execution on a single node (connected with several GPUs) and a version has been implemented that takes full advantage of Intel systems. While the research described below was performed using Caffe, the extensions can be applied to Tensorflow as well..\nCaffe (and other deep learning software) is also equipped with several optimizations designed tc avoid significant problems in training deep networks. The vanishing gradient problem (Bianchini & Scarselli[[2014) causes deep networks to fail to learn much at all in the early layers, and was solved in (Hinton & Osindero2006) and (Bengio et al.] 2007) where it was shown that a network could be trained one layer at a time with autoencoders (Hinton & Salakhutdinov 2006), and then put together to form a single network (Vincent et al.||2010). Another optimization that helps to solve this problem is switching from sigmoidal neurons to rectified linear neurons.\nThe problem of accelerating gradient descent, especially disctributed across compute resources, is of. interest to many researchers. Approaches generally fall into two categories, whether or not they are equivalent to having run using a single compute device; utilizing a single compute device necessarily. computes gradient updates and applies them immediately to the model. Further, the gradient updates. can be classified as either synchronous or asynchronous depending on whether the communication of. the gradients can be overlapped with any computation of the gradients. For example, the DistBelief. parameter server approach (Dean et al.2012) computes gradient updates asynchronously based on an out-of-date copy of the model and applies them to the latest model. Though this is not equivalent. to having run on a single device, it is able to process samples much faster..\nChen et al.(2016) revisit asynchronous gradient descent and propose a few synchronous variants in. order to impove time to convergence. 
Notably, they show that waiting for all workers to complete aggregating the gradients, and applying the gradients to the same common model (thereby each. worker has a copy of the latest model) provides a good time to convergence while also leveraging. multiple compute devices. Their approach is where this paper begins while additionally proposing. approaches ranging from synchronous to parameter server variants.\nMachine Learning algorithms designed to emulate the computational structure of the brain to model data are called \"Neural Networks.'' The basic unit of a neural network is the neuron and neurons are connected to one another via synapses."}, {"section_index": "5", "section_name": "3.1.1 BACKPROPAGATION", "section_text": "Neural networks are trained through an algorithm called backpropagation. This is a means of com. puting gradients layer by layer to implement the gradient descent algorithm's update rule of\nw' w+XVwC b' b+XVbC\nwhere w are the weights, b the biases, A the learning rate, and C is a cost function to be optimized usually square error or cross-entropy. This rule is often replaced by a slightly more complex rule. such as Adaptive Gradient Descent (AdaGrad) (Duchi et al.| 2011) or Momentum (Qian1999)\nAlgorithm 1 Back Propagation\nDackPropagallon 1: input: Data X E Rnxp and labels Y E Rnxl 2: for i from 1 to n do. 3: Compute all z(l) and a(l) g(ne) =-(y-ane) O 0(z(ne)) 4: 5: for l from ne - 1 to 2 do g(l) = Wlg(l+1) O 0'(z(l)) 6: 7: end for 8: Vw(e)C = 8(l+1)a(e)T 9: Vg(e)C = g(l+1) 10: end for\nAlthough there are several nonlinear activation functions in common use, the networks examined ir this paper only include rectified linear units (ReLU) where ReLU(x) = max(0, x)..\nCaffe (Jia et al.]2014) is one of the leading software packages for building and training neura networks. It provides abstractions for a wide range of topologies and for training them with man different types of optimizers. Caffe provides abstractions for operations on multi-dimensional arrays (tensors) which are essential for implementing Deep Learning algorithms. From an input tensor, ar output tensor, and tensors for each hidden layer, Caffe constructs a computational graph that man ages these tensors and their updates as a single object. Caffe is particularly useful for researchers because it is heavily optimized and can be modified through an open source C++ backend.\nAs Caffe's runtime is implemented in C++, it can extract native performance from the computa. tion environment it is run on. Furthermore, Caffe abstracts GPU computations, leveraging nVIDIA. CUDA Deep Neural Network Library (cuDNN) for the task. We have modified this code for dis-. tributed memory computation on large scale systems using MPI to natively use network hardware for. optimal performance. The base, synchronous implementation is similar to FireCaffe (Iandola et al. 2015), another distributed memory implementation of Caffe. Further modifications are described in. Section4"}, {"section_index": "6", "section_name": "4 SOLUTION SPACE", "section_text": "The goal of improving gradient descent is to accelerate the time to solution without sacrificing the accuracy of the model. The base case to consider is then computing and applying gradients one batch at a time on a single compute device. One way to accelerate the computation while alsc maintaining equivalence to the sequential is to use data parallelism. 
Data parallelism is where the raditional batch is further subdivided into equally-sized mini-batches, each mini-batch is compute on separate devices, then the gradients resulting from each mini-batch is averaged together. Sinc each gradient update is itself an average, taking the average of the mini-gradients results in an update that is effectively the same as having computed the original batch size. This is called the effective batch size. Data parallelism is the approach we explore in this paper, attempting many ways o hiding the latency of the gradient communication that occurs between compute devices. We use MPI to communicate the gradients.\nTo compute the gradients, we set w(e), b(l) the weights and biases for each layer, z(l+1) w(e) a(e) + b(l) and a(l) = (z(e)), where o is the activation function. Let ne represent the number of layers. Then, we use Algorithm|1\nThere are three phases of computation within Caffe that pass over the enumerated layers of the network. First, the forward pass computes the output result given the samples from the input batch starting at the first layer. Next, starting at the last (output) layer, based on the difference between the output result and the ground truth, the backward pass uses the backpropagation technique to compute the gradients for each layer. Lastly, one final pass is made over the network to apply the gradients to the weights and biases before starting the process over again with the next batch.\nCaffe provides callback methods in its C++ interface that interject user-defined functionality into ke phases of the computation (see|3.2). Specifically, one user-defined function is executed immediately before the foward pass when the batch computation begins. The other user-defined function execute after the backward pass finishes, but before the application of the gradients to the weights and biases Additional callback functions were added to support finer-grained control over the three phases o computation. One of the additional callbacks executes after each gradient is computed during th backward phase, once per set of learnable parameters, such as the weights or biases of a given layei Another callback function that was added is called once per learnable parameter during the appl phase, just before the gradients are applied. Lastly, a callback function was added that turns the gradient application into a task queue, requesting additional tasks in an unspecified order until al gradients have been applied.\nA critical implementation detail for any of our proposed approaches is to make sure the individual. network models maintained by each compute device start from the same random initial conditions for the weights and biases. Before the first batch is computed, the weights and biases from the master. process are copied (broadcast) to the other processes. That way any gradients that are computed. when averaged together, are based on the same initial conditions.."}, {"section_index": "7", "section_name": "4.1 SYNCHRONOUS GRADIENT DESCENT", "section_text": "Similar to what Chen et al.(2016) proposes and what is implemented in FireCaffe (Iandola et al.. 2015), synchronous gradient descent averages the gradients from each mini-batch together before. applying them, forming one complete batch at a time. The way this is implemented in Caffe is to. use the callback function that executes when all gradients are ready to be applied. During this call. back, MPI_Allreduce is used to sum the gradients, placing the same resulting sum on each compute. 
device. This function is blocking, meaning it returns control back to Caffe only after the sum is. computed across all devices. Since the result is a sum and not the intended average, it is then scaled. down based on the number of compute devices in use. It is important to note that the reductior operation can be performed in-place, meaning it can use the memory location directly holding the. gradient without performing any costly memory copies, especially for networks with a large numbei. of parameters such as AlexNet. This approach also has the important quality that the gradients are. averaged after they have been used by each layer of the backpropagation, preserving the importance. of any activations within the network against the mini-batch instead of against the effective batch"}, {"section_index": "8", "section_name": "4.2 LAYER-WISE GRADIENT DESCENT", "section_text": "Chen et al.[(2016) proposes the pipelining of gradient computation and application. For example the gradients of upper layers can be concurrently applied while computing the gradients of lower layers. This approach must be done carefully to maintain equivalence with the sequential base case We make the observation that gradients can be averaged as soon as they are computed during the backward phase, instead of waiting for all gradients to be computed. However, adjacent layers wil use and/or update the gradients of layers that have otherwise finished computing their gradients This implies the averaging of the gradients must be performed on a copy of the gradients rather thar in-place. Further, the averaging of the copied gradients must finish before they can be applied.\nWe utilize a background thread of computation in order to perform the gradient averaging concurren. with the remaining gradient computation. This provides maximal overlap of the communicatior. latency with useful computation. There are a few options when to apply the averaged gradients. Waiting for all communication to finish before applying all gradients is straightfoward and similar tc. the synchronous approach described previously, though perhaps at least some of the communicatior. latency would be overlapped. Another approach is to wait, one layer at a time, for the gradients. for a particular layer to finish averaging and then apply the gradients. It is intuitive to perform the waiting in the same order in which backpropagation was performed, from the last layer to the firs layer. Lastly, since all gradient updates are independent, we can perform them in an arbitrary order This takes advantage of the observation that not all layers have the same number of parameters, anc. further, the gradients for the weights and the gradients for the biases can be averaged separately. the size of the weight gradients are typically larger than the bias gradients, implying that the bias. gradients will complete their communication more quickly. Since the communcation of the various parameters can finish somewhat arbitrarily based on when the communication was initiated and the.\nsize of the communication, we can apply the gradients as soon as they complete their averaging. W evaluate these strategies in6"}, {"section_index": "9", "section_name": "4.3 ASYNCHRONOUS GRADIENT DESCENT", "section_text": "As stated in (Chen et al.]2016), parameter server implementations suffer from poor convergence since gradient updates are calculated based on out-of-date networks. 
Continuing with our data par allel approach, there is a lower limit to the size of the mini-batches and therefore the number o compute devices that can be utilized. As the amount of work per compute device decreases pro portional to the decreasing size of the mini-batches, there is less computation available to mask th latency of the gradient averaging across the devices. Initiating the averaging layer-wise as describec above may not be enough to mitigate this problem.\nWe propose delaying the application of the gradients by a fixed number of iterations much smalle than the number of compute devices as would have been done in a parameter server approach The gradients are delayed by using a concurrent communication thread and applying the gradier one, two, or three iterations later thus giving the averaging enough time to complete as needec If the gradient needs to be delayed by one iteration, this requires one communication thread an one additional buffer to hold the gradient; delaying by two iterations requires two communicatio threads and two additional buffers and so on. This approach is somewhere between a paramete server (Dean et al.|2012) and the various approaches that maintain equivalency with a sequentia computation.\nThe implementations evaluated in this paper focus on data parallelism and the averaging of gradients across compute devices. This is achieved using MPI and parallel I/O..\nThe data parallelism is achieved by distributing datasets across compute devices, partitioning ther based on the number of devices utilized; each device receives a disjoint subset of the dataset an no samples are shuffled or exchanged between compute devices outside of the gradient averaging Caffe frequently uses a database in LMDB format for its datasets, however this format cannot b used on remote (network) filesystems or even between processes on the same host. Caffe mitigate this issue when using more than one GPU on the same host by using a single I/O reading threa and a round-robin deal of the samples to device-specific queues. Our implementations mitigate thi issue by first converting an LMDB database into a netCDF file (Rew & Davis]1990). netCDF file can be read and partitioned using paralle1 MPI-IO via the parallel netCDF library (Li et al.[|2003)."}, {"section_index": "10", "section_name": "5.2 DISTRIBUTED MEMORY IMPLEMENTATION USING MPI", "section_text": "For single-node GPU computation, using one or more GPU devices in a single host, Caffe provides a means of allocating one contiguous buffer to hold the data for the weights and biases and a second buffer to hold the gradients for each. We extended this approach for CPU hosts. A single contigous buffer allows the non-layer-wise, i.e., network-wise gradient averages to be performed using a single MPI reduction operation. The layer-wise implementations require one MPI reduction operation per network parameter. There is a fixed cost to start a communication primitive regardless of how much data is communicated. It is sometimes beneficial to aggregate otherwise many small communication requests into a larger one\nAlthough Caffe provides a way of utilizing all GPUs within the host, it does not currently leverage NVIDIA's NCCL package (NVIDIA Corporation2015) for optimized, high-bandwidth collective communication routines. 
We used the NCCL equivalent to the MPI all reduction to sum gradients across GPU devices on the DGX-1 platform.\nIn this section, we present an experimental evaluation and analysis of the heuristics described in section4"}, {"section_index": "11", "section_name": "6.1 HARDWARE ARCHITECTURES", "section_text": "We evaluate using a CPU cluster as well as NVIDIA's speialized DGX-1 multi-GPU host system Each node of the multi-node cluster consists of a multi-core Intel Sandybridge CPU connected via InfiniBand. We use Intel MPI 5.1.2 for performance evaluation. The heuristics are implemented in Caffe (Jia et al.]2014), specifically the intelcaffe branch designed to optimize performance on Intel CPUs.\nThe DGX-1 system contains 8 Pasca1 GPUs connected using the high-speed NVlink interconnect For the DGX-1 evaluations, the latest version of Berkley's Caffe was modified to use the NCCI communicaiton primitives in addition to our algorithmic changes"}, {"section_index": "12", "section_name": "6.2 IMAGENET AND NETWORK ARCHITECTURES", "section_text": "We evaluate on two distinct network architectures trained on the ImageNet dataset. ImageNet refers specifically to the ILSVRC2015 (Russakovsky et al.]2015) dataset. This dataset consists of a train- ing set of just under 1.3 million images of various sizes (as jpg files) divided among 1000 classes,. along with a validation set consisting of 5oooo images of the same type and classes. Additionally,. for the competition, there is a testing set, but it is held separately and not available publicly. It is. established as one of the benchmark dataset for machine learning with large datasets, and among the famous architectures that achieved record top 1 and top 5 accuracies on it are AlexNet (Krizhevsky et al.]2012) and GoogLeNet (Szegedy et al.2015).\nWe evaluate on AlexNet and GoogLeNet because they are now well-established models with known training regimes and loss curves. They also demonstrate two different regimes for paralleliza- tion: AlexNet has approximately 60 million parameters that need to be communicated, whereas GoogLeNet has approximately 4 million. In contrast to the smaller amount of communication for. GoogLeNet, it requires roughly twice the amount of time to process a each image than AlexNet does when communication is ignored.\nThese results show that delaying the gradient updates by one or more iterations is the most effective. means of hiding the communication latency. The layer-wise approaches did not perform as well as expected. These trends were consistent across both hardware platforms.\nThe layer-wise approaches, though promising as equivalent to a sequential computation, were not. able to complete their gradient averages quickly enough. Compared to the delayed gradient ap. proach, this is perhaps intuitive. The delayed gradient approach is able to hide the communication latency across all three complete phases of the computation whereas the layer-wise approaches only. have as long as it takes to complete the backpropagation phase. This is not enough time to complete the communication, especially as the mini-batch sizes decrease and therefore provide less work to. mask the communication.\nIn addition to looking at the time per batch above, the rates of convergence of these heuristics mus be evaluated. All of the heuristics completed training AlexNet to the standard top-1 accuracy of 54% using the default AlexNet settings that come with Caffe. 
However, it is worth noting that a the beginning of training, they showed different loss curves showing that there is a tradeoff betweer number of batches per second and accuracy at a given batch as shown in Table|1\nFigure 1 compares the implemented approaches relative to a communication-less baseline \"no comm\". The effective batch sizes were 256 and 32 for AleNet and GoogLeNet, respectively. For example, using 8 compute devices for GoogLeNet uses a mini-batch size of 32/8 = 4. The evalu- ation on DGX-1 were limited to 8 compute devices whereas the CPU cluster evaluation eventually hit the strong scaling limit for data parallelism.\n3.5 25 3 20 peeenr enn p 2.5 peeeoe eon 1 2 2 15 1 I feronnss -4 -2 1.5 I ferrennnss 8 -4 10 16 8 32 0.5 5 0 no comm SGD SGD Layer- AGD 1 AGD 2 SGD task- SGD task- wise comm comm wise, 1 wise, 2 0 comm comm no comm SGD AGD 1 comm AGD 2 comm AGD 3 comm (a) AlexNet CPU (b) AlexNet DGX-1 3 30 2.5 25 2 20 -1 1 1.5 2 ferrrrnnss 15 2 -4 -4 1 8 8 16 0.5 5 0 no comm SGD SGD Layer- AGD 1 AGD 2 SGD task- SGD task- wise comm comm wise, 1 wise, 2 0 comm comm no comm SGD AGD 1 comm AGD 2 comm AGD 3 comm (c) GoogLeNet CPU (d) GoogLeNet DGX-1\nFigure 1: Evaluation of SGD and AGD approaches. Effective batch sizes were 256 and 32 for AlexNet anc GoogLeNet, respectively.\nTable 1: AlexNet Accuracy After Every 10o0 Batches on DGX-\nThere is a tradeoff between maintaining equivalence to sequential methods versus leveraging the. vast computational resources available for gradient descent. We find that asynchronous methods. can give a 1.7x speedup while not sacrificing accuracy at the end of an otherwise identical training regime. This improvement was achieved without the need for a warm start, contrary to previously. published results using parameter servers.\nbatch 1000 2000 3000 4000 5000 serial, 1 GPU 0.0124 0.05164 0.10102 0.13432 0.16454 SGD 0.01116 0.03984 0.07594 0.10622 0.13052 AGD, 1 comm 0.0039 0.01324 0.02632 0.05076 0.07362 AGD, 2 comm 0.00104 0.00356 0.00636 0.01282 0.01688\nWe also evaluated whether these approaches converged in addition to just improving the number of. iterations per second. All approaches evaluated managed to converge within the exepcted number of iterations. Notably, AlexNet on DGX-1 reached convergence in 11 hours using the delayed gradient approach and two communication threads using the standard AlexNet network from Caffe.."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A com parison between shallow and deep architectures. IEEE Transactions on Neural Networks and Learning Systems. 25(8):1553 - 1565. 2014. doi: 10.1109/TNNLS.2013.2293637.\nJeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In P. Bartlett, F.c.n. Pereira, C.j.c. Burges, L. Bottou, and K.q. Wein- berger (eds.), Advances in Neural Information Processing Systems 25, pp. 1232-1240. 2012. URL http://books.nips.cc/papers/files/nips25/nIPs2012 0598.pdf\nJohn Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159. July 2011. ISsN 1532-4435. URL http://dl.acm.0rg/citation.cfm?id=1953048.2021068\nMartin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah,. Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vin-. cent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Watten-. berg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning. on heterogeneous systems, 2015. URLhttp: //tensorf1ow. org/ Software available from. tensorflow.org.\nAmit Agarwal, Eldar Akchurin, Chris Basoglu, Guoguo Chen, Scott Cyphers, Jasha Droppo, Adam Eversole, Brian Guenter, Mark Hillebrand, Ryan Hoens, Xuedong Huang, Zhiheng Huang Vladimir Ivanov, Alexey Kamenev, Philipp Kranen, Oleksii Kuchaiev, Wolfgang Manousek Avner May, Bhaskar Mitra, Olivier Nano, Gaizka Navarro, Alexey Orlov, Marko Padmilac Hari Parthasarathi, Baolin Peng, Alexey Reznichenko, Frank Seide, Michael L. Seltzer, Mal- colm Slaney, Andreas Stolcke, Yongqiang Wang, Huaming Wang, Kaisheng Yao, Dong Yu Yu Zhang, and Geoffrey Zweig. An introduction to computational networks and the compu- tational network toolkit. Technical Report MSR-TR-2014-112, August 2014.URLhttp: researchmicrosof+com/ a snx?i d=22 6641\nFrederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.\nYoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training. of deep networks. In B. Scholkopf, J. C. Platt, and T. Hoffman (eds.), Advances in Neural Infor. mation Processing Systems 19, pp. 153-160. MIT Press, 2007. URL http : / /papers . nips . cc/paper/3048-qreedy-layer-wise-training-of-deep-networks.pdf\names Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation.\nJianmin Chen, Rajat Monga, Samy Bengio, and Rafal Jozefowicz. Revisiting distributed syn chronous SGD. CoRR, abs/1604.00981, 2016. URL http://arxiv.org/abs/1604. 00 981\nRonan Collobert, Samy Bengio, and Johnny Marithoz. Torch: A modular machine learning softwar library, 2002.\nG E Hinton and R R Salakhutdinoy. Reducing the dimensionality of data witl. neuralnetworks. Science,313(5786):504-507,July 2006. doi: 10.1126/science 1127647. URL http://www.ncbi.nlm.nih.gov/sites/entrez?db=pubmed8 uid=16873662&cmd=showdetailview&indexed=google\nGeoffrey E. Hinton and Simon Osindero. A fast learning algorithm for deep belief nets. Neura Computation, 18:2006, 2006.\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep. convolutional neural networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Wein-. berger (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Cur-. ran Associates, Inc., 2012. 
URL http://papers.nips.cc/paper/4824-imagenet- classification-with-deep-convolutional-neural-networks.pdf\nPascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371-3408, December 2010. ISSN 1532-4435 URLhttp://dl.acm.org/citation.cfm?id=1756006.1953039\nianwei Li, Wei-keng Liao, Alok Choudhary, Robert Ross, Rajeev Thakur, William Gropp, Robert Latham, Andrew Siegel, Brad Gallagher, and Michael Zingale. Parallel netcdf: A high performance scientific i/o interface. In Supercomputing, 2003 ACM/IEEE Conference, pp. 39-39 IEEE, 2003."}] |
BJKYvt5lg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Building high-quality generative models of natural images has been a long standing challenge. Al- though recent work has made significant progress (Kingma & Welling2014van den Oord et al. 2016a b), we are still far from generating convincing, high-resolution natural images.\nMany recent approaches to this problem are based on an efficient method for performing amor. tized, approximate inference in continuous stochastic latent variables: the variational autoencode (VAE) (Kingma & Welling2014) jointly trains a top-down decoder generative neural network witl. a bottom-up encoder inference network. VAEs for images typically use rigid decoders that mode. the output pixels as conditionally independent given the latent variables. The resulting model learn. a useful latent representation of the data and effectively models global structure in images, but has. difficulty capturing small-scale features such as textures and sharp edges due to the conditional inde. pendence of the output pixels, which significantly hurts both log-likelihood and quality of generate. samples compared to other models.\nPixelCNNsvan den Oord et al. 2016a b) are another state-of-the-art image model. Unlike VAEs PixelCNNs model image densities autoregressively, pixel-by-pixel. This allows it to capture fin details in images, as features such as edges can be precisely aligned. By leveraging carefully con structed masked convolutions (van den Oord et al.|2016b), PixelCNNs can be trained efficiently ii parallel on GPUs. Nonetheless, PixelCNN models are still very computationally expensive. Unlik typical convolutional architectures they do not apply downsampling between layers, which mean that each layer is computationally expensive and that the depth of a PixelCNN must grow linearl with the size of the images in order for it to capture dependencies between far-away pixels. Pix. elCNNs also do not explicitly learn a latent representation of the data, which can be useful fo. downstream tasks such as semi-supervised learning.\nCorresponding author; igu1222@gmai1. com"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Figure 1: Samples from hierarchical PixelVAE on the LSUN bedrooms dataset"}, {"section_index": "2", "section_name": "2 RELATED WORK", "section_text": "There have been many recent advancements in generative modeling of images. We briefly discus some of these below, especially those that are related to our approach..\nThe Variational Autoencoder (VAE) (Kingma & Welling 2014) is a framework to train neural net works for generation and approximate inference jointly by optimizing a variational bound on the data log-likelihood. The use of normalizing flows (Rezende & Mohamed,2015) improves the flex. ibility of the VAE approximate posterior. Based on this, Kingma et al. (2016) develop an efficient formulation of an autoregressive approximate posterior model using MADE (Germain et al.]2015) In our work, we avoid the need for such flexible inference models by using autoregressive priors.\nThe idea of using autoregressive conditional likelihoods in VAEs has been explored in the context of language modeling in (Bowman et al.|2016), however in that work the use of latent variables fails to improve likelihood over a purely autoregressive model.\nWe present PixelVAE, a latent variable model which combines the largely complementar. advantages of VAEs and PixelCNNs by using PixelCNN-based masked convolutions in th. 
conditional output distribution of a VAE (a minimal sketch of this decoder follows this list).
We extend PixelVAE to a hierarchical model with multiple stochastic layers and autoregressive decoders at each layer. This lets us autoregressively model not only the output pixels but also higher-level latent feature maps.
On MNIST, we show that PixelVAE: (1) establishes a new state-of-the-art likelihood, (2) performs comparably to PixelCNN using far fewer computationally expensive autoregressive layers, (3) learns more compressed latent codes than a standard VAE while still accounting for most non-trivial structure, and (4) learns a latent code which separates digits better than a standard VAE.
We evaluate hierarchical PixelVAE on two challenging natural image datasets (64 × 64 ImageNet and LSUN bedrooms). On 64 × 64 ImageNet, we report likelihood competitive with the state of the art at significantly less computational cost. On LSUN bedrooms, we generate high-quality samples and show that hierarchical PixelVAE learns to model different properties of the scene with each of its multiple layers.
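To make this decoder concrete, the following PyTorch sketch shows a PixelCNN-style masked convolution and a minimal two-layer conditional decoder. The channel sizes and layer count are illustrative rather than our exact configuration, and the latent features enter through an unmasked 1 × 1 convolution (a common implementation choice) instead of raw concatenation.

import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose kernel is zeroed at and after the centre pixel (mask 'A'),
    or strictly after it (mask 'B'), so that output pixel i depends only on
    x_1, ..., x_{i-1} in raster order (van den Oord et al., 2016b)."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kH, kW = self.kernel_size
        mask = torch.ones(kH, kW)
        mask[kH // 2, kW // 2 + int(mask_type == 'B'):] = 0  # centre row: zero from the centre ('A') or just after it ('B')
        mask[kH // 2 + 1:, :] = 0                            # all rows below the centre
        self.register_buffer('mask', mask[None, None])       # broadcasts over (out, in) channels

    def forward(self, x):
        self.weight.data.mul_(self.mask)  # re-apply the mask after weight updates
        return super().forward(x)

class PixelVAEDecoder(nn.Module):
    """Sketch only. z_feat: latent features already upsampled to the image's spatial size."""
    def __init__(self, z_channels, img_channels=1, hidden=32):
        super().__init__()
        self.x_conv = MaskedConv2d('A', img_channels, hidden, 5, padding=2)
        self.z_conv = nn.Conv2d(z_channels, hidden, 1)  # conditioning path needs no mask
        self.mid = MaskedConv2d('B', hidden, hidden, 5, padding=2)
        self.logits = nn.Conv2d(hidden, 256 * img_channels, 1)  # 256-way softmax per pixel

    def forward(self, x, z_feat):
        h = torch.relu(self.x_conv(x) + self.z_conv(z_feat))
        h = torch.relu(self.mid(h))
        return self.logits(h)  # trained with cross-entropy against quantized pixel values

At training time the full conditional likelihood is evaluated in parallel with teacher forcing; at sampling time the decoder is re-run once per generated pixel.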
on the latent variables to model the structure of the input at scales larger than the combined receptive\nSimultaneously to our work, Chen et al.(2016) present a VAE model for images with an an autore-. gressive output distribution. In constrast to|Chen et al.(2016), who focus on models with a single. layer of latent variables, we also investigate models with a hierarchy of latent variables (and cor responding autoregressive priors) and show that they enable us to scale our model to challenging. natural image datasets.\nAs opposed to most VAE decoders that model each dimension of the output independently (for example, by modeling the output as a Gaussian with diagonal covariance), we use a conditional PixelCNN in the decoder. Our decoder models x as the product of each dimension x; conditioned on all previous dimensions and the latent variable z:\nDkLqzLx)]|pz nL ZL ZL DkL(q(z-1x)p(z-1zD) DkL(q(z1|x)|[p(z1|z2)) h1 Z1 -Ez1~q(z1|x) log p(x|z1)\nFigure 3: We generate top-down through a hierarchical latent space decomposition. The inference network generates latent variables by composing successive deterministic functions to compute pa rameters of the stochastic random variables. Dotted lines denote contributions to the cost.\nThe performance of VAEs can be improved by stacking them to form a hierarchy of stochastic laten. variables: in the simplest configuration, the VAE at each level models a distribution over the laten. variables at the level below, with generation proceeding downward and inference upward throug. each level (i.e. as in Fig. 3). In convolutional architectures, the intermediate latent variables ar. ypically organized into feature maps whose spatial resolution decreases toward higher levels.\nOur model can be extended in the same way. At each level, the generator is a conditional PixelCNN over the latent features in the level below. This lets us autoregressively model not only the output distribution over pixels but also the prior over each set of latent feature maps. The higher-level PixelCNN decoders use diagonal Gaussian output layers instead of 256-way softmax, and model the dimensions within each spatial location (i.e. across feature maps) independently. This is done for simplicity, but is not a limitation of our model.\nThe output distributions over the latent variables for the generative and inference networks decom pose as follows (see Fig.3)\nZ1,ZL)=pzL)pZL-1ZL).pz1Z2 Z1,zLx)=qz1x).qzLx\nWe optimize the negative of the evidence lower bound (sum of data negative log-likelihood and KL-divergence of the posterior over latents with the prior).\nDkL(q(z1x)|p(zL) nL ZL DkL(q(z-1x)p(z-1zD) ZL- DkL(q(z1|x)|[p(z1|z2) h1 Ez1~q(z1|x) log p(x|z1) C\nield of our PixelCNN layers. As a result of this, our architecture captures global structure at a much ower computational cost than a standard PixelCNN implementation..\nL(x,q,p) = -Ez1~q(z1|x) logp(x|z1) + DkL(q(z1,zL|x)||p(z1,,zL) L q(Zi|x -Ez1~q(z1|x) l0g p(x|z1) + II q(zj|x) I 10g ..dZ 1 p(Zi|Zi+1 i=1 z1,,zL j=1 L -Ez1~q(z1|x) l0g p(x|z1) + I q(zj|x) log p(Zi|Zi+1 L q(zi|x) Ez1~9(z1|x) l0g p(x|z1) + q(Zi+1|x)q(zi|x) l0g p(zi|Zi+1 i=1zi,Zi+1\nNote that when specifying an autoregressive prior over each latent level z, we can leverage masked. convolutions (van den Oord et al.|2016b) and samples drawn independently from the approximate posterior q(zi x) (i.e. from the inference network) to train efficiently in parallel on GPUs..\nTable 1: We compare performance of different models on binarized MNIST. 
\"PixelCNN\" is the model described in van den Oord et al. (2016a). Our corresponding latent variable model is \"Pixel- VAE\". \"Gated PixelCNN\" and \"Gated PixelVAE\" use the gated activation function in|van den Oord et al.(2016b). In \"Gated PixelVAE without upsampling\", a linear transformation of latent variable conditions the (gated) activation in every PixelCNN layer instead of using upsampling layers.\nWe evaluate our model on the binarized MNIST dataset (Salakhutdinov & Murray 2008; Lecun et al.[1998) and report results in Table[1 We also experiment with a variant of our model in whicl each PixelCNN layer is directly conditioned on a linear transformation of latent variable, z (rather than transforming z first through several upsampling convolutional layers) (as in (van den Oord et al. 2016b) and find that this further improves performance, achieving an NLL upper bound comparable with the current state of the art. We estimate the marginal likelihood of our MNIST model using the importance sampling technique in Burda et al.(2015), which computes a lower bound on the likelihood whose tightness increases with the number of importance samples per datapoint. We use N = 5000 samples per datapoint (higher values don't appear to significantly affect the likelihood estimate) and achieve state-of-the-art likelihood."}, {"section_index": "4", "section_name": "4.1.1 USING FEW PIXELCNN LAYERS", "section_text": "The masked convolutional layers in PixelCNN are computationally expensive because they operat. at the full resolution of the image and in order to cover the full receptive field of the image, PixelCNI typically needs a large number of them. One advantage of our architecture is that we can achiev. strong performance with very few PixelCNN layers, which makes training and sampling from ou. model significantly faster than PixelCNN. To demonstrate this, we compare the performance of ou. model to PixelCNN as a function of the number of PixelCNN layers (Fig. 4a). We find that witl. fewer than 10 autoregressive layers, our PixelVAE model performs much better than PixelCNN. This is expected since with few layers, the effective receptive field of the PixelCNN output units i. too small to capture long-range dependencies in the data..\nWe also observe that adding even a single PixelCNN layer has a dramatic impact on the NLL bound of PixelVAE. This is not surprising since the PixelCNN layer helps model local characteristics which\n= -Ez1~q(z1|x) l0g p(x|z1) + Ezi+1~q(zi+1|x)[DKL(q(Zi|x)|p(Zi|Zi+1)) i=1\nKL-divergence Reconstruction 90 98 Gated PixeVAE NLL bound 80 96 Gated Pixe|CNN NLL 94 70 punog udddn 60 92 50 90 88 40 T7N 86 30 84 20 e 82 10 80 0 2 4 6 8 10 12 14 12 3 0 456 7 8 9 10 11 12 13 21 #PixelCNN layers #PixelCNN Iayers (a) (b)\nFigure 4: (a) Comparison of Negative log-likelihood upper bound of PixelVAE and NLL for Pixel CNN as a function of the number of PixelCNN layers used. (b) Cost break down into KL divergence and reconstruction cost."}, {"section_index": "5", "section_name": "4.1.2 LATENT VARIABLE INFORMATION CONTENT", "section_text": "Because the autoregressive conditional likelihood function of PixelVAE is expressive enough t. model some properties of the image distribution, it isn't forced to account for those propertie. through its latent variables as a standard VAE is. As a result, we can expect PixelVAE to lear. latent representations which are invariant to textures, precise positions, and other attributes whicl. are more efficiently modeled by the autoregressive decoder. 
To empirically validate this, we traii. PixelVAE models with different numbers of autoregressive layers (and hence, different PixelCNN. receptive field sizes) and plot the breakdown of the NLL bound for each of these models into th. reconstruction term log p(x[z) and the KL divergence term DkL(q(z|x)||p(z)) (Fig.4b). The KI divergence term can be interpreted as a measure of the information content in the posterior distri. bution q(z[x) (in the sense that in expectation, samples from q(z[x) require K L(q[p) fewer bits tc. code under a code optimized for q than under one optimized for p (Burnham & Anderson2003). and hence, models with smaller KL terms encode less information in their latent variables..\nWe observe a sharp drop in the KL divergence term when we use a single autoregressive layer compared to no autoregressive layers, indicating that the latent variables have been freed from having to encode small-scale details in the images. Since the addition of a single PixelCNN layer allows the decoder to model interactions between pixels which are at most 2 pixels away from each other (since our masked convolution filter size is 5 5), we can also say that most of the non-trivial (long-range) structure in the images is still encoded in the latent variables."}, {"section_index": "6", "section_name": "4.1.3 LATENT REPRESENTATIONS", "section_text": "On MNIST, given a sufficiently high-dimensional latent space, VAEs have already been shown t learn representations in which digits are well-separated (Sonderby et al.]2016). However, this tasl becomes more challenging as the capacity of the latent space is decreased. PixelVAE's flexibl output distribution should allow it to learn a latent representation which is invariant to small detail and thus better models global factors of variation given limited capacity..\nTo test this. we train a PixelVAE with a two-dimensional latent space, and an equivalent VAE. We visualize the distribution of test set images in latent space and observe that PixelVAE's latent. representation separates digits significantly better than VAE (Figure|5). To quantify this difference we train a K-nearest neighbors classifier in the latent space of each model and find that PixelVA\nare complementary to the global characteristics which a VAE with a factorized output distribution models.\nVAE PixelVAE (a) (b)\nFigure 5: Visualization of the MNIST test set in the latent space of (a) convolutional VAE and (b PixelVAE with two latent dimensions. PixelVAE separates classes more completely than VAE\nFigure 6: We visually inspect the variation in image features captured by the different levels of stochasticity in our model. For the two-level latent variable model trained on 64 64 LSUN bed- rooms, we vary only the top-level sampling noise (top) while holding the other levels constant. vary only the middle-level noise (middle), and vary only the bottom (pixel-level) noise (bottom) It appears that the top-level latent variables learn to model room structure and overall geometry, the middle-level latents model color and texture features, and the pixel-level distribution models low-level image characteristics such as texture, alignment, shading.\nsignificantly outperforms VAE, achieving a test error of 7.2% compared to VAE's 22.9%. We alsc note that unlike VAE, PixelVAE learns a representation in which digit identity is largely disentangled. 
"}, {"section_index": "7", "section_name": "4.2 LSUN BEDROOMS", "section_text": "To evaluate our model's performance with more data and complicated image distributions, we perform experiments on the LSUN bedrooms dataset (Yu et al., 2015). We use the same preprocessing as in Radford et al. (2015) to remove duplicate images in the dataset. For quantitative experiments we use a 32 × 32 downsampled version of the dataset, and we present samples from a model trained on the 64 × 64 version.
We train a two-level PixelVAE with latent variables at 1 × 1 and 8 × 8 spatial resolutions. We find that this outperforms both a two-level convolutional VAE with diagonal Gaussian output and a single-level PixelVAE in terms of log-likelihood and sample quality. We also try replacing the PixelCNN layers at the higher level with a diagonal Gaussian decoder and find that this hurts log-likelihood, which suggests that multi-scale PixelVAE uses those layers effectively to autoregressively model latent features.
Figure 7: Samples from hierarchical PixelVAE on the 64 × 64 ImageNet dataset."}, {"section_index": "8", "section_name": "4.2.1 FEATURES MODELED AT EACH LAYER", "section_text": "To see which features are modeled by each of the multiple layers, we draw multiple samples while varying the sampling noise at only a specific layer (either at the pixel-wise output or one of the latent layers) and visually inspect the resulting images (Fig. 6). When we vary only the pixel-level sampling (holding z1 and z2 fixed), samples are almost indistinguishable and differ only in precise positioning and shading details, suggesting that the model uses the pixel-level autoregressive distribution to model only these features. Samples where only the noise in the middle-level (8 × 8) latent variables is varied have different objects and colors, but appear to have similar basic room geometry and composition. Finally, samples with varied top-level latent variables have diverse room geometry.
The 64 × 64 ImageNet generative modeling task was introduced in van den Oord et al. (2016a) and involves density estimation of a difficult, highly varied image distribution. We trained a hierarchical PixelVAE model (with a similar architecture to the model in Section 4.2) on 64 × 64 ImageNet and report validation set likelihood in Table 2. Our model achieves a likelihood competitive with van den Oord et al. (2016a;b), despite being substantially less computationally complex. A visual inspection of ImageNet samples from our model (Fig. 7) also reveals them to be significantly more globally coherent than samples from PixelRNN.
Table 2: Model performance on 64 × 64 ImageNet. We achieve competitive NLL at a fraction of the computational complexity of other leading models.
Model | NLL Validation (Train) | FLOPs
Convolutional DRAW (Gregor et al., 2016) | ≤ 4.10 (4.04) | -
Real NVP (Dinh et al., 2016) | = 4.01 (3.93) | -
PixelRNN (van den Oord et al., 2016a) | = 3.63 (3.57) | 154 × 10^9
Gated PixelCNN (van den Oord et al., 2016b) | = 3.57 (3.48) | 134 × 10^9
Hierarchical PixelVAE | ≤ 3.62 (3.55) | 63 × 10^9"}, {"section_index": "9", "section_name": "CONCLUSIONS", "section_text": "In this paper, we introduced a VAE model for natural images with an autoregressive decoder that achieves strong performance across a number of datasets. We explored properties of our model, showing that it can generate more compressed latent representations than a standard VAE and that it can use fewer autoregressive layers than PixelCNN.
We established a new state-of-the-art on bina rized MNIST dataset in terms of likelihood on 64 64 ImageNet and demonstrated that our mode generates high-quality samples on LSUN bedrooms..\nThe ability of PixelVAE to learn compressed representations in its latent variables by ignoring the small-scale structure in images is potentially very useful for downstream tasks. It would be interest- ing to further explore our model's capabilities for semi-supervised classification and representation learning in future work."}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank the developers of Theano (Theano Development Team. 2016) an( Blocks and Fuel (van Merrienboer et al.2015). We acknowledge the support of the following agencies for research funding and computing support: Ubisoft, Nuance Foundation, NSERC, Cal. cul Quebec, Compute Canada, CIFAR, MEC Project TRA2014-57088-C2-1-R, SGR project 2014 SGR-1506 and TECNIOspring-FP7-ACCI grant."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Ben gio. Generating sentences from a continuous space. 2016.\nYuri Burda, Roger Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. arXi preprint arXiv:1509.00519, 2015.\nXi Chen, Diederik P Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya. Sutskever. and Pieter Abbeel. Variational Lossy Autoencoder. arXiv.org. November 2016\nLaurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP arXiv.org, May 2016\nJeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. Adversarial feature learning. CoRR abs/1605.09782,2016. URLhttp://arxiv.0rg/abs/1605.09782\nKarol Gregor, Frederic Besse, Danilo Jimenez Rezende, Ivo Danihelka, and Daan Wierstra. Towards Conceptual Compression. arXiv.org, April 2016.\nDiederik P. Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. CoRR, abs/1606.04934, 2016.\nKenneth P. Burnham and David R. Anderson. Model selection and multi-model inference, 2nd ed\nVincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropi etro, and Aaron Courville. Adversarially learned inference. CoRR, abs/1606.00704, 2016\nMatthieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. Made: Masked autoencoder for distribution estimation. CoRR, abs/1502.03509, 2015. URLhttps : //arxiv. org/ abs/ 1502.03509\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Infor mation Processing Systems. pp. 2672-2680. 2014.\nDanilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. Ir International Conference on Machine Learning (ICML), 2015..\nJason Tyler Rolfe. Discrete variational autoencoders. arXiv preprint arXiv:1609.02200, 2016.\nTim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training gans. CoRR, abs/1606.03498, 2016\nCasper Kaae Sonderby, Tapani Raiko, Lars Maalge, Soren Kaae Sonderby, and Ole Winther. Ladder Variational Autoencoders. arXiv.org, February 2016.\nTheano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016. 
URL http://arxiv.org/abs/ 1605.02688\nAaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural network In International Conference on Machine Learning (ICML), 2016a.\nAaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. CoRR, abs/1606.05328 2016b. URLhttp://arxiv.0rg/abs/1606.05328\nBart van Merrienboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning arXiv preprint, abs/1506.00619,2015. URLhttp://arxiv.0rg/abs/1506.00619\nFisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. CoRR, abs/1506.03365.. 2015.\nAlec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversaria1 networks. CoRR, abs/1511.06434, 2015.\nRuslan Salakhutdinov and Iain Murray. On the quantitative analysis of deep belief networks. In In\n(a) (b)\nFigure 8: Reconstructions for (a) LSUN Bedrooms and (b) 6464 ImageNet. Left-most columns are images from the test set, and the following 5 columns are top-down generations from the highest level of latent variables. We see that the reconstructions capture high-level semantic properties of the original images while varying in most of the details. We also visualized similar reconstructions by generations from the lower level of latent variables, and in this case the reconstructions were visually indistinguishable from the original images.\nN5 90W 79 7 0190107 M 7.0 NO 4 9M 3 &\nFigure 9: Samples from a PixelVAE with a receptive field of 7 pixels (left), a PixelCNN with ar 11-pixel receptive field (middle; roughly the same computational complexity as the PixelVAE), anc a PixelCNN with a 7-pixel receptive field (right)\nFigure 10: Reconstructions from the MNIST test set. Alternate columns are original (left) an reconstructed images (right).\n# 9 4 3 M M - 4\nFigure 11: More examples for visualizations of the variation in image features captured at different levels of stochasticity. Holding the other levels constant, we vary only the top-level sampling noise. (top), only the middle-level noise (middle), and only the bottom (pixel-level) noise (bottom).."}, {"section_index": "12", "section_name": "E.1 MNIST", "section_text": "For our quantitative MNIST experiments, the architectures of our encoder and decoder are as fol-. lows. Unless otherwise specified, all convolutional layers use ReLU nonlinearity. We also make an open-source implementation of this model available at https : / /github. com/igu1222/.\nEncoder x -> (, ) Kernel size Stride Output channels Convolution 3x3 1 32 Convolution 3x3 2 32 Convolution 3x3 1 32 Convolution 3x3 2 64 Pad 77 feature maps to 88 Convolution 3x3 1 64 Convolution 3x3 2 64 Convolution 3x3 1 64 Convolution 3x3 1 64 Convolution 3x3 1 64 Flatten Linear 2 latent dimensionality\nThe LSUN and ImageNet models use the same architecture: all encoders and decoders are residual. networks; we use pre-activation residual blocks with two 3 3 convolutional layers each and ELU. nonlinearity. Some residual blocks perform downsampling, using a 2 2 stride in the second con-. volutional layer, or upsampling, using subpixel convolution in the first convolutional layer. Weight. 
normalization is used in masked convolutional layers; in all other layers, batch normalization is used. We optimize using Adam with learning rate 1e-3. Training proceeds for 400K iterations using. batch size 48.\nDecoder z -> x Kernel size Stride Output channels Linear 4 x 4 x 64 Reshape to (64, 4, 4) Convolution 3x3 1 64 Convolution 3x3 1 64 Transposed convolution 3x3 2 64 Convolution 3x3 1 64 Crop 88 feature maps to 77 Transposed convolution 3x3 2 32 Convolution 3x3 1 32 Transposed convolution 3x3 2 32 Convolution 3x3 1 32 PixelCNN gated residual block 7x7 1 32 PixelCNN gated residual block(s) [ 5x5 ] x N 1 32 PixelCNN gated convolution 1x1 1 32 PixelCNN gated convolution 1x1 1 32 Convolution 1x1 1 1\nBottom-level Decoder z1 -> x\nBottom-level Encoder x -> h1 Kernel size. Resample Output channels Embedding 48 Convolution 1x1 192 Residual block. [ 3x3 1 x 2 192 Residual block. [ 3x3 1 x 2 Down x2 256 Residual block. [ 3x3 ] x 2 256 Residual block. [ 3x3 ] x 2 Down x2 512 Residual block. [ 3x3 ] x 2 512 Residual block. [ 3x3 1 x 2 512 Residual block. [ 3x3 ] x 2 512\nBottom-level Decoder z1 -> x Kernel size Resample Output channels Convolution 1x1 1 512 Residual block [3x3] x 2 512 1 Residual block [3x3] x 2 512 Residual block [3x3] x 2 512 Residual block [3x3 ] x 2 Up x2 256 Residual block [3x3 ] x 2 256 Residual block [3x3 ] x 2 Up x2 192 Residual block [3x3 ] x 2 192 Embedding 48 PixelCNN gated residual block [3x3 ] x 2 384 PixelCNN gated residual block [3x3] x 2 384 PixelCNN gated residual block [3x3] x 2 384\nOP-ICVC 12 Kernel size. Resample Output channels. Residual block. [ 3x3 1 x 2 512 Residual block. [ 3x3 ] x 2 512 Residual block. [ 3x3 1 x 2 Down x2 512 Residual block. [3x3 1 x 2 512 Residual block. [ 3x3 ] x 2 512 Residual block. [ 3x3 ] x 2 Down x2 512 Residual block. [ 3x3 1 x 2 512 Residual block. [ 3x3 ] x 2 512\nTop-level Decoder Z2 -> Z1 Kernel size Resample Output channels Linear 4x4x512 Reshape to (512, 4, 4) Residual block. [3x3 ] x 2 512 Residual block. [3x3] x 2 512 Residual block. [3x3 ] x 2 Up x2 512 Residual block. [3x3 ] x 2 512 Residual block. [3x3] x 2 512 Residual block. [3x3] x 2 Up x2 512 Residual block. [3x3 ] x 2 512 Residual block. [3x3 ] x 2 512 PixelCNN convolution 5x5 512 PixelCNN gated residual block [3x3 ] x 2 512 PixelCNN gated residual block [3x3 ] x 2 512 PixelCNN gated residual block [3x3 ] x 2 512 Convolution 1x1 256"}] |
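The MNIST encoder table above implies a specific sequence of feature-map sizes. As a quick sanity check, here is a minimal sketch that walks the input's spatial size through the listed strides; it assumes 'same' padding (output size is the ceiling of input size divided by stride), which is an assumption since the tables do not state the padding scheme.

```python
import math

def same_conv(size, stride):
    """Output spatial size of a 'same'-padded convolution (assumed padding scheme)."""
    return math.ceil(size / stride)

size = 28                      # MNIST input resolution
for stride in (1, 2, 1, 2):    # first four convolutions in the encoder table
    size = same_conv(size, stride)
assert size == 7
size += 1                      # pad 7x7 feature maps to 8x8, as the table indicates
for stride in (1, 2, 1, 1, 1): # remaining convolutions before the flatten step
    size = same_conv(size, stride)
print(size, "-> flatten to", size * size * 64, "units before the linear layer")  # 4 -> 1024
```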
SJZAb5cel | [{"section_index": "0", "section_name": "A JOINT MANY-TASK MODEL: GROWING A NEURAI NETWORK FOR MULTIPLE NLP TASKS", "section_text": "Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka & Richard Socher\nTransfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of mor-. phology, syntax and semantics would benefit each other by being trained in a. single model. We introduce such a joint many-task model together with a strategy. for successively growing its depth to solve increasingly complex tasks. All lay-. ers include shortcut connections to both word representations and lower-level task. predictions. We use a simple regularization term to allow for optimizing all model. weights to improve one task's loss without exhibiting catastrophic interference. of the other tasks. Our single end-to-end trainable model obtains state-of-the-art. results on chunking, dependency parsing, semantic relatedness and textual entail-. ment. It also performs competitively on POS tagging. Our dependency parsing. layer relies only on a single feed-forward pass and does not require a beam search.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "In deep learning, unsupervised word vectors are useful representations and often used to initialize. recurrent neural networks for subsequent tasks (Pennington et al., 2014). However, not being jointly. trained, deep NLP models have yet shown benefits from predicting many (> 4) increasingly com-. plex linguistic tasks each at a successively deeper layer. Instead, existing models are often designec to predict different tasks either entirely separately or at the same depth (Collobert et al., 2011),. ignoring linguistic hierarchies.\nWe introduce a Joint Many-Task (JMT) model, outlined in Fig. 1, which predicts increasingly com plex NLP tasks at successively deeper layers. Unlike traditional NLP pipeline systems, our single JMT model can be trained end-to-end for POS tagging, chunking, dependency parsing, semantic relatedness, and textual entailment. We propose an adaptive training and regularization strategy to grow this model in its depth. With the help of this strategy we avoid catastrophic interference between tasks, and instead show that both lower and higher level tasks benefit from the joint train- ing. Our model is influenced by the observation of Sggaard & Goldberg (2016) who showed that predicting two different tasks is more accurate when performed in different layers than in the same layer (Collobert et al., 2011)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The potential for leveraging multiple levels of representation has been demonstrated in a variety of ways in the field of Natural Language Processing (NLP). For example, Part-Of-Speech (POS) tags are used to train syntactic parsers. The parsers are used to improve higher-level tasks, such as nat ural language inference (Chen et al., 2016), relation classification (Socher et al., 2012), sentiment analysis (Socher et al., 2013; Tai et al., 2015), or machine translation (Eriguchi et al., 2016). How ever, higher level tasks are not usually able to improve lower level tasks, often because systems are pipelines and not trained end-to-end.\nFigure 1: Overview of the joint many-task model predicting different linguistic outputs at succes sively deeper layers\nIn this section, we assume that the model is trained and describe its inference procedure. 
We begin at the lowest level and work our way to higher layers and more complex tasks.
For each word w_t in the input sentence s of length L, we construct a representation by concatenating a word embedding and a character embedding.
Word embeddings: We use Skip-gram (Mikolov et al., 2013) to train a word embedding matrix, which will be shared across all of the tasks. The words which are not included in the vocabulary are mapped to a special UNK token.
Character n-gram embeddings: Character n-gram embeddings are learned using the same Skip-gram objective function as the word vectors. We construct the vocabulary of the character n-grams in the training data and assign an embedding for each character n-gram. The final character embedding is the average of the unique character n-gram embeddings of a word w_t.¹ For example, the character n-grams (n = 1, 2, 3) of the word "Cat" are {C, a, t, #BEGIN#C, Ca, at, t#END#, #BEGIN#Ca, Cat, at#END#}, where "#BEGIN#" and "#END#" represent the beginning and the end of each word, respectively. The use of the character n-gram embeddings efficiently provides morphological features and information about unknown words. The training procedure for the character n-gram embeddings is described in Section 3.1, and for further details, please see Appendix A. Each word is subsequently represented as x_t, the concatenation of its corresponding word and character vectors.
¹Wieting et al. (2016) used a nonlinearity, but we have observed that the simple averaging also works well.
The first layer of the model is a bi-directional LSTM (Graves & Schmidhuber, 2005; Hochreiter & Schmidhuber, 1997) whose hidden states are used to predict POS tags. We use the following Long Short-Term Memory (LSTM) units for the forward direction:
i_t = σ(W_i g_t + b_i), f_t = σ(W_f g_t + b_f), o_t = σ(W_o g_t + b_o),
u_t = tanh(W_u g_t + b_u), c_t = i_t ⊙ u_t + f_t ⊙ c_{t−1}, h_t = o_t ⊙ tanh(c_t),   (1)
where we define the input g_t as g_t = [h_{t−1}; x_t], i.e. the concatenation of the previous hidden state and the word representation of w_t. The backward pass is expanded in the same way, but a different set of weights is used.
Figure 2: Overview of the POS tagging and chunking tasks in the first and second layers of the JMT model.
For predicting the POS tag of w_t, we use the concatenation of the forward and backward states in the one-layer bi-LSTM corresponding to the t-th word: h_t = [→h_t; ←h_t]. Then each h_t (1 ≤ t ≤ L) is fed into a standard softmax classifier with a single ReLU layer which outputs the probability vector y_t^(1) over the POS tags."}, {"section_index": "3", "section_name": "2.3 WORD-LEVEL TASK: CHUNKING", "section_text": "Chunking is also a word-level classification task which assigns a chunking tag (B-NP, I-VP, etc.) for each word. The tag specifies the region of major phrases (or chunks) in the sentence.
Chunking is performed in the second bi-LSTM layer on top of the POS layer. The input at each time step concatenates the first-layer hidden state, the word representation x_t, and the weighted POS label embedding y_t^(pos), defined as
y_t^(pos) = Σ_{j=1}^{C} p(y_t^(1) = j | h_t^(1)) ℓ(j),   (2)
where C is the number of the POS tags, p(y_t^(1) = j | h_t^(1)) is the probability value that the j-th POS tag is assigned to w_t, and ℓ(j) is the corresponding label embedding. The probability values are automatically predicted by the POS layer working like a built-in POS tagger, and thus no gold POS tags are needed. This output embedding can be regarded as a similar feature to the K-best POS tag feature which has been shown to be effective in syntactic tasks (Andor et al., 2016; Alberti et al., 2015).
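A minimal sketch of the weighted label embedding in Eq. (2); the dimensions are illustrative assumptions (45 Penn Treebank POS tags, 100-dimensional label embeddings), and the random arrays stand in for a trained classifier's output and learned embeddings:

```python
import numpy as np

def weighted_label_embedding(probs, label_embeddings):
    """Soft label embedding: probability-weighted sum of per-tag embeddings (Eq. (2))."""
    # probs: (num_tags,) softmax output; label_embeddings: (num_tags, dim)
    return probs @ label_embeddings

rng = np.random.default_rng(0)
num_pos_tags, dim = 45, 100
logits = rng.normal(size=num_pos_tags)
probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax over POS tags
ell = rng.normal(size=(num_pos_tags, dim)) # label embedding matrix l(j)
y_pos = weighted_label_embedding(probs, ell)
print(y_pos.shape)  # (100,)
```

Because the weighting uses the full softmax distribution rather than the argmax tag, the gradient flows back into the POS classifier when higher layers are trained.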
For predicting the chunking tags, we employ the same strategy as POS tagging, using a single ReLU hidden layer before the classifier."}, {"section_index": "4", "section_name": "2.4 SYNTACTIC TASK: DEPENDENCY PARSING", "section_text": "Dependency parsing identifies syntactic relationships (such as an adjective modifying a noun) between pairs of words in a sentence. We use the third bi-LSTM layer on top of the POS and chunking layers to classify relationships between all pairs of words. The input vector for the LSTM includes hidden states, word representations, and the label embeddings for the two previous tasks, where the chunking label embedding is computed in a similar fashion as the POS vector in Eq. (2). The POS and chunking tags are commonly used to improve dependency parsing (Attardi & Dell'Orletta, 2008).
Like a sequential labeling task, we simply predict the parent node (head) for each word in the sentence. Then a dependency label is predicted for each of the child-parent node pairs. To predict the parent node of the t-th word w_t, we define a matching function between w_t and the candidates of the parent node as
m(t, j) = h_t^(3)ᵀ W_d h_j^(3),   (3)
where W_d is a parameter matrix. For the root, we define h_{L+1}^(3) = r as a parameterized vector. To compute the probability that w_j (or the root node) is the parent of w_t, the scores are normalized:
p(j | h_t^(3)) = exp(m(t, j)) / Σ_{k=1, k≠t}^{L+1} exp(m(t, k)).   (4)
Figure 3: Overview of dependency parsing in the third layer of the JMT model.
Figure 4: Overview of the semantic tasks in the top layers of the JMT model.
Next, the dependency labels are predicted using [h_t^(3); h_j^(3)] as input to a standard softmax classifier with a single ReLU layer. At test time, we greedily select the parent node and the dependency label for each word in the sentence.² At training time, we use the gold child-parent pairs to train the label predictor.
²This method currently assumes that each word has only one parent node, but it can be expanded to handle multiple parent nodes, which leads to cyclic graphs.
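The head-selection step of Eqs. (3)-(4) reduces to a bilinear score followed by a softmax over the L + 1 candidates. A minimal NumPy sketch with placeholder values (the random arrays stand in for trained parameters and third-layer bi-LSTM states):

```python
import numpy as np

def head_probabilities(h, w_d, root):
    """Normalized head scores from Eqs. (3)-(4): m(t, j) = h_t^T W_d h_j."""
    L = h.shape[0]
    cand = np.vstack([h, root[None, :]])          # (L+1, d): all words plus the root node
    scores = h @ w_d @ cand.T                     # (L, L+1) matching scores m(t, j)
    scores[np.arange(L), np.arange(L)] = -np.inf  # a word cannot be its own parent
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
L, d = 6, 100
h = rng.normal(size=(L, d)) * 0.1     # hypothetical third-layer states
w_d = rng.normal(size=(d, d)) * 0.01  # hypothetical matching matrix W_d
root = rng.normal(size=d) * 0.1       # hypothetical root vector r
probs = head_probabilities(h, w_d, root)
print(probs.argmax(axis=1))           # greedy parent per word (index L means the root)
```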
The next two tasks model the semantic relationships between two input sentences. The first task measures the semantic relatedness between two sentences. The output is a real-valued relatedness score for the input sentence pair. The second task is a textual entailment task, which requires one to determine whether a premise sentence entails a hypothesis sentence. There are typically three classes: entailment, contradiction, and neutral.
The two semantic tasks are closely related to each other. If the semantic relatedness between two sentences is very low, they are unlikely to entail each other. Based on this intuition, and to make use of the information from lower layers, we use the fourth and fifth bi-LSTM layers for the relatedness and entailment tasks, respectively.
Now it is required to obtain the sentence-level representation rather than the word-level representations used in the first three tasks. We compute the sentence-level representation h_s^(4) as the element-wise maximum values across all of the word-level representations in the fourth layer:
h_s^(4) = max(h_1^(4), h_2^(4), ..., h_L^(4)).
To model the semantic relatedness between s and s', we follow Tai et al. (2015). The feature vector for representing the semantic relatedness is computed as follows:
d_1(s, s') = [ |h_s^(4) − h_s'^(4)| ; h_s^(4) ⊙ h_s'^(4) ],   (5)
where |h_s^(4) − h_s'^(4)| is the element-wise absolute difference and h_s^(4) ⊙ h_s'^(4) is the element-wise multiplication. Both of them can be regarded as two different similarity metrics of the two vectors. Then d_1(s, s') is fed into a softmax classifier with a single Maxout hidden layer (Goodfellow et al., 2013) to output a relatedness score (from 1 to 5 in our case) for the sentence pair.
For entailment classification between two sentences, we also use the max-pooling technique as in the semantic relatedness task. To classify the premise-hypothesis pair (s, s') into one of the three classes, we compute the feature vector d_2(s, s') as in Eq. (5), except that we do not use the absolute values of the element-wise subtraction, because we need to identify which is the premise (or hypothesis). Then d_2(s, s') is fed into a standard softmax classifier.
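A minimal sketch of the temporal max-pooling and the pairwise features of Eq. (5), with placeholder fourth-layer states; `symmetric=False` gives the signed variant d_2 used for entailment:

```python
import numpy as np

def sentence_features(h_s, h_sp, symmetric=True):
    """Temporal max-pooling plus pairwise features d(s, s') from Eq. (5)."""
    hs = h_s.max(axis=0)     # element-wise max over time steps: (dim,)
    hsp = h_sp.max(axis=0)
    diff = np.abs(hs - hsp) if symmetric else hs - hsp  # d1 uses |.|; d2 keeps the sign
    return np.concatenate([diff, hs * hsp])

rng = np.random.default_rng(0)
h_s = rng.normal(size=(9, 100))   # hypothetical states for a 9-word sentence
h_sp = rng.normal(size=(7, 100))  # hypothetical states for a 7-word sentence
d1 = sentence_features(h_s, h_sp)                    # relatedness features
d2 = sentence_features(h_s, h_sp, symmetric=False)   # entailment keeps direction
print(d1.shape, d2.shape)  # (200,) (200,)
```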
To make use of the output from the relatedness layer directly, we use the label embeddings for the relatedness task. More concretely, we compute the class label embeddings for the semantic relatedness task similar to Eq. (2). The final feature vectors that are concatenated and fed into the entailment classifier are the weighted relatedness label embedding and the feature vector d_2(s, s').³ We use three Maxout hidden layers before the classifier.
³This modification does not affect the LSTM transitions, and thus it is still possible to add other single sentence-level tasks on top of our model.
The model is trained jointly over all datasets. During each epoch, the optimization iterates over each full training dataset in the same order as the corresponding tasks described in the modeling section."}, {"section_index": "5", "section_name": "3.1 PRE-TRAINING WORD REPRESENTATIONS", "section_text": "We pre-train word embeddings using the Skip-gram model with negative sampling (Mikolov et al., 2013). We also pre-train the character n-gram embeddings using Skip-gram. The only difference is that each input word embedding in the Skip-gram model is replaced with its corresponding average embedding of the character n-gram embeddings described in Section 2.1. These embeddings are fine-tuned during the training of our JMT model. We denote the embedding parameters as θ_e."}, {"section_index": "6", "section_name": "3.2 TRAINING THE POS LAYER", "section_text": "Let θ_pos = (W_pos, b_pos, θ_e) denote the set of model parameters associated with the POS layer, where W_pos is the set of the weight matrices in the first bi-LSTM and the classifier, and b_pos is the set of the bias vectors. The objective function to optimize θ_pos is defined as follows:
J_1(θ_pos) = − Σ_s Σ_t log p(y_t^(1) = α | h_t^(1)) + λ‖W_pos‖² + δ‖θ_e − θ_e'‖²,   (6)
where α is the correct POS tag of w_t in the training sentence s, and λ and δ are hyperparameters.
We call the second regularization term δ‖θ_e − θ_e'‖² a successive regularization term. The successive regularization is based on the idea that we do not want the model to forget the information learned for the other tasks. In the case of POS tagging, the regularization is applied to θ_e, and θ_e' is the embedding parameter after training the final task in the top-most layer at the previous training epoch. δ is a hyperparameter.
For the chunking layer, the objective function is defined as follows:
J_2(θ_chk) = − Σ_s Σ_t log p(y_t^(2) = α | h_t^(2)) + λ‖W_chk‖² + δ‖θ_pos − θ_pos'‖²,   (7)
which is similar to that of POS tagging, and θ_chk is (W_chk, b_chk, E_pos, θ_e), where W_chk and b_chk are the weight and bias parameters including those in θ_pos, and E_pos is the set of the POS label embeddings. θ_pos' is the one after training the POS layer at the current training epoch.
For the dependency layer, the objective function is defined as follows:
J_3(θ_dep) = − Σ_s Σ_t log p(α | h_t^(3)) p(β | h_t^(3), h_α^(3)) + λ(‖W_dep‖² + ‖W_d‖²) + δ‖θ_chk − θ_chk'‖²,   (8)
where p(α | h_t^(3)) is the probability assigned to the correct parent node α of w_t, and p(β | h_t^(3), h_α^(3)) is the probability assigned to the correct dependency label β for the child-parent pair (w_t, α).
Following Tai et al. (2015), the objective function for the relatedness layer is defined as follows:
J_4(θ_rel) = Σ_{(s,s')} KL( p̂(s, s') ‖ p(s, s') ) + λ‖W_rel‖² + δ‖θ_dep − θ_dep'‖²,   (9)
where p̂(s, s') is the gold distribution over the relatedness scores, p(s, s') is the predicted distribution, and KL(p̂ ‖ p) is the KL-divergence between the two distributions.
For the entailment layer, the objective function is defined as follows:
J_5(θ_ent) = − Σ_{(s,s')} log p(y_(s,s')^(5) = α | h_s^(5), h_s'^(5)) + λ‖W_ent‖² + δ‖θ_rel − θ_rel'‖²,   (10)
where α is the correct label of the premise-hypothesis pair (s, s'). θ_ent is defined as (W_ent, b_ent, E_pos, E_chk, E_rel, θ_e), where E_rel is the set of the relatedness label embeddings."}, {"section_index": "7", "section_name": "4 RELATED WORK", "section_text": "Many deep learning approaches have proven to be effective in a variety of NLP tasks and are becoming more and more complex. They are typically designed to handle single tasks, or some of them are designed as general-purpose models (Kumar et al., 2016; Sutskever et al., 2014) but applied to different tasks independently.
For handling multiple NLP tasks, multi-task learning models with deep neural networks have been proposed (Collobert et al., 2011; Luong et al., 2016), and more recently Søgaard & Goldberg (2016) have suggested that using different layers for different tasks is more effective than using the same layer in jointly learning closely-related tasks, such as POS tagging and chunking. However, the number of tasks was limited, or the tasks had very similar settings like word-level tagging, and it was not clear how lower-level tasks could also be improved by combining higher-level tasks.
In the field of computer vision, some transfer and multi-task learning approaches have also been proposed (Li & Hoiem, 2016; Misra et al., 2016). For example, Misra et al. (2016) proposed a multi-task learning model to handle different tasks. However, they assume that each data sample has annotations for the different tasks, and do not explicitly consider task hierarchies."}, {"section_index": "8", "section_name": "5.1 DATASETS", "section_text": "POS tagging: To train the POS tagging layer, we used the Wall Street Journal (WSJ) portion of the Penn Treebank, and followed the standard split for the training (Section 0-18), development (Section 19-
21), and test (Section 22-24) sets. The evaluation metric is the word-level accuracy.\nDependency parsing: We also used the WsJ corpus for dependency parsing, and followed the. standard split for the training (Section 2-21), development (Section 22), and test (Section 23) sets. We converted the treebank data to Stanford style dependencies using the version 3.3.0 of the Stan- ford converter. The evaluation metrics are the Unlabeled Attachment Score (UAS) and the Labeled Attachment Score (LAS), and punctuations are excluded for the evaluation..\nSemantic relatedness: For the semantic relatedness task, we used the SiCK dataset (Marelli et al., 2014), and followed the standard split for the training (sICK_train.txt), developmen. (sICK_trial.txt), and test (sICK_test_annotated.txt) sets. The evaluation metric is the Mean Squared Error (MSE) between the gold and predicted scores..\nTextual entailment: For textual entailment, we also used the SiCK dataset and exactly the sam data split as the semantic relatedness dataset. The evaluation metric is the accuracy.\nPre-training embeddings: We used the word2vec toolkit to pre-train the word embeddings. We created our training corpus by selecting lowercased English Wikipedia text and obtained 100. dimensional Skip-gram word embeddings trained with the context window size 1, the negative sam- pling method (15 negative samples), and the sub-sampling method (10-5 of the sub-sampling co-. efficient).4 We also pre-trained the character n-gram embeddings using the same parameter set- tings with the case-sensitive Wikipedia text. We trained the character n-gram embeddings for n = 1, 2, 3, 4 in the pre-training step.\n4It is empirically known that such a small window size in leads to better results on syntactic tasks than larg window sizes. Moreover, we have found that such word embeddings work well even on the semantic tasks.\nRecently, Rusu et al. (2016) have proposed a progressive neural network model to handle multiple. reinforcement learning tasks, such as Atari games. Like our JMT model, their model is also suc-. cessively trained according to different tasks using different layers called columns in their paper. In. their model, once the first task is completed, the model parameters for the first task are fixed, and then the second task is handled by adding new model parameters. Therefore, accuracy of the previ-. ously trained tasks is never improved. In NLP tasks, multi-task learning has the potential to improve. not only higher-level tasks, but also lower-level tasks. Rather than fixing the pre-trained model pa rameters, our successive regularization allows our model to continuously train the lower-level tasks without significant accuracy drops.\nChunking: For chunking, we also used the WsJ corpus, and followed the standard split for the training (Section 15-18) and test (Section 20) sets as in the CoNLL 2000 shared task. We used Section 19 as the development set, following Sggaard & Goldberg (2016), and employed the IOBES tagging scheme. The evaluation metric is the F1 score defined in the shared task..\nEmbedding initialization: We used the pre-trained word embeddings to initialize the word embed. dings, and the word vocabulary was built based on the training data of the five tasks. All word. in the training data were included in the word vocabulary, and we employed the word-dropou. method (Kiperwasser & Goldberg, 2016) to train the word embedding for the unknown words. W. 
also built the character n-gram vocabulary for n = 2, 3, 4, following Wieting et al. (2016), and th. character n-gram embeddings were initialized with the pre-trained embeddings. All of the labe. embeddings were initialized with uniform random values in -/6/(dim + C), /6/(dim + C). where dim = 100 is the dimensionality of the label embeddings and C is the number of labels\nOptimization: At each epoch, we trained our model in the order of POS tagging, chunking, depen. dency parsing, semantic relatedness, and textual entailment. We used mini-batch stochastic gradient decent to train our model. The mini-batch size was set to 25 for POS tagging, chunking, and the. SICK tasks, and 15 for dependency parsing. We used a gradient clipping strategy with growing clip. ping values for the different tasks; concretely, we employed the simple function: min(3.0, depth). where depth is the number of bi-LSTM layers involved in each task, and 3.0 is the maximum value is the hyperparameter to decrease the learning rate. We set e to 1.0 and p to 0.3. At each epoch, the. same learning rate was shared across all of the tasks..\nRegularization: We set the regularization coefficient to 10-6 for the LSTM weight matrices, 10-5 for the weight matrices in the classifiers, and 10-3 for the successive regularization term excluding. the classifier parameters of the lower-level tasks, respectively. The successive regularization coeffi cient for the classifier parameters was set to 10-2. We also used dropout (Hinton et al., 2012). The. dropout rate was set to O.2 for the vertical connections in the multi-layer bi-LSTMs (Pham et al.. 2014), the word representations and the label embeddings of the entailment layer, and the classi fier of the POS tagging, chunking, dependency parsing, and entailment. A different dropout rate. of O.4 was used for the word representations and the label embeddings of the POS, chunking, anc. dependency layers, and the classifier of the relatedness layer..\nTable 1 shows our results of the test sets on the five different tasks.5 The column \"Single\"' show.. the results of handling each task separately using single-layer bi-LSTMs, and the column \"JMTall. shows the results of our JMT model. The single task settings only use the annotations of their owr. tasks. For example, when treating dependency parsing as a single task, the POS and chunking tag. are not used. We can see that all results of the five different tasks are improved in our JMT model. which shows that our JMT model can handle the five different tasks in a single model. Our JM7. model allows us to access arbitrary information learned from the different tasks. If we want to us the model just as a POS tagger, we can use the output from the first bi-LSTM layer. The output car. be the weighted POS label embeddings as well as the discrete POS tags..\nTable 1 also shows the results of three subsets of the different tasks. For example, in the case oi \"JMTABc\", only the first three layers of the bi-LSTMs are used to handle the three tasks. In the case of \"JMTpE\", only the top two layers are used just as a two-layer bi-LSTM by omitting all information from the first three layers. The results of the closely-related tasks show that our JMT model improves not only the high-level tasks, but also the low-level tasks.\n5The development and test sentences of the chunking dataset are included in the dependency parsing dataset although our model does not explicitly use the chunking annotations of the development and test data. 
In sucl cases, we show the results in parentheses.\nWeight initialization: The dimensionality of the hidden layers in the bi-LSTMs was set to 100. We initialized all of the softmax parameters and bias vectors, except for the forget biases in the LSTMs, with zeros, and the weight matrix Wa and the root node vector r for dependency parsing were also initialized with zeros. All of the forget biases were initialized with ones. The other weight matrices were initialized with uniform random values in [-/6/(row + col), /6/(row + col)], where row and col are the number of rows and columns of the matrices, respectively.\nTable 1: Test set results for the five tasks. In the relatedness task, the lower scores are better\nTable 3: Chunking results\nTable 2: POS tagging results\nTable 5: Semantic relatedness results"}, {"section_index": "9", "section_name": "6.2 COMPARISON WITH PUBLISHED RESULTS", "section_text": "Chunking: Table 3 shows the results of chunking, and our JMT model achieves the state-of-the-ai result. Sggaard & Goldberg (2016) proposed to jointly learn POS tagging and chunking in differer ayers, but they only showed improvement for chunking. By contrast, our results show that th low-level tasks are also improved by the joint learning.\nDependency parsing: Table 4 shows the results of dependency parsing by using only the WSJ corpus in terms of the dependency annotations, and our JMT model achieves the state-of-the-art result.6 It is notable that our simple greedy dependency parser outperforms the previous state-of- the-art result which is based on beam search with global information. The result suggests that the bi-LSTMs efficiently capture global information necessary for dependency parsing. Moreover, our single task result already achieves high accuracy without the POS and chunking information. Further analysis on our dependency parser can be found in Appendix B.\nSemantic relatedness: Table 5 shows the results of the semantic relatedness task, and our JMT model achieves the state-of-the-art result. The result of \"JMTpE\" is already better than the previous state-of-the-art results. Both of Zhou et al. (2016) and Tai et al. (2015) explicitly used syntactic tree structures. and Zhou et al. (2016) relied on attention mechanisms. However. our method use. the simple max-pooling strategy, which suggests that it is worth investigating such simple methods before developing complex methods for simple tasks. Currently, our JMT model does not explicitly use the learned dependency structures, and thus the explicit use of the output from the dependency layer should be an interesting direction of future work.\n6Choe & Charniak (2016) employed the tri-training technique to expand the training data with automatically-generated 400,000 trees in addition to the WSJ data, and they reported 95.9 UAS and 94.1 LAS\nTable 4: Dependency results\nTable 6: Textual entailment results\nPOS tagging: Table 2 shows the results of POS tagging, and our JMT model achieves the score close to the state-of-the-art results. The best result to date has been achieved by Ling et al. (2015) which uses character-based LSTMs. Incorporating the character-based encoders into our JMT model would be an interesting direction, but we have shown that the simple pre-trained character n-gram embeddings lead to the promising result.\nTextual entailment: Table 6 shows the results of textual entailment, and our JMT model achieves the state-of-the-art result.7 The previous state-of-the-art result in Yin et al. (2016) relied on attention. 
mechanisms and dataset-specific data pre-processing and features. Again, our simple max-pooling. strategy achieves the state-of-the-art result boosted by the joint training. These results show the. importance of jointly handling related tasks. Error analysis can be found in Appendix C..\nHere, we first investigate the effects of using deeper layers for the five different single tasks. We. then show the effectiveness of our training strategy: the successive regularization, the shortcut con-. nections of the word representations, the embeddings of the output labels, the character n-gram. embeddings, the use of the different layers for the different tasks, and the vertical connections of multi-layer bi-LSTMs. All of the results shown in this section are the development set results..\nDepth: The single task settings shown in Table 1 are obtained y using single layer bi-LSTMs, but in our JMT model, the igher-level tasks use successively deeper layers. To investigate he gap between the different number of the layers for each task, ve also show the results of using multi-layer bi-LSTMs for the single task settings, in the column of \"Single+'\" in Table 7. More concretely, we use the same number of the layers with our JMT nodel; for example, three layers are used for dependency pars- ng, and five layers are used for textual entailment. As shown in S these results, deeper layers do not always lead to better results, and the joint learning is more important than making the models cor\nSuccessive regularization: In Table 8. the column of \"w/o SR'. shows the results of omitting the successive regularization terms described in Section 3. We can see that the accuracy of chunking s improved by the successive regularization, while other results. are not affected so much. The chunking dataset used here is rel- atively small compared with other low-level tasks, POS tagging. and dependency parsing. Thus, these results suggest that the suc-. cessive regularization is effective when dataset sizes are imbal anced.\n- Shortcut connections: Our JMT model feeds the word rep-. resentations into all of the bi-LSTM layers, which is called the. shortcut connection. Table 9 shows the results of \"JMTall\" with. and without the shortcut connections. The results without the shortcut connections are shown in the column of \"w/o SC'. These. results clearly show that the importance of the shortcut connec-. tions in our JMT model, and in particular, the semantic tasks in. the higher layers strongly rely on the shortcut connections. That. is, simply stacking the LSTM layers is not sufficient to handle a. variety of NLP tasks in a single model. In Appendix D, we show. how the shared word representations change according to each tas\nOutput label embeddings: Table 10 shows the results without using the output labels of the POS. chunking, and relatedness layers, in the column of w/o LE''. These results show that the explicit use of the output information from the classifiers of the lower layers is important in our JMT model. 
The re- sults in the column of \"w/o SC&LE' are the ones without both of the shortcut connections and the la- bel embeddings.\n7The result of \"JMTal1\" is slightly worse than that of \"JMTDE\", but the difference is not significant becaus the training data is small.\nSingle Single+ POS 97.52 Chunking 95.65 96.08 Dependency UAS 93.38 93.88 Dependency LAS 91.37 91.83 Relatedness 0.239 0.665 Entailment 83.8 66.4\nJMTall w/o SR POS 97.88 97.85 Chunking 97.59 97.13 Dependency UAS 94.51 94.46 Dependency LAS 92.60 92.57 Relatedness 0.236 0.239 Entailment 84.6 84.2\nTable 8: Effectiveness of the Successive Regularization (SR)\nJMTall w/o SC POS 97.88 97.79 Chunking 97.59 97.08 Dependency UAS 94.51 94.52 Dependency LAS 92.60 92.62 Relatedness 0.236 0.698 Entailment 84.6 75.0\nTable 9: Effectiveness of the Shortcut Connections (SC)\nTable 10: Effectiveness of the Label Embed dings (LE).\nCharacter n-gram embeddings: Table 11 shows the results Sing for the three single tasks, POS tagging, chunking, and depen- POS Chu dency parsing, with and without the pre-trained character n-gram Dep mbeddings. The column of \"W&C' corresponds to using both Dep of the word and character n-gram embeddings, and that of \"Only Tabl W\" corresponds to using only the word embeddings. These re- char sults clearly show that jointly using the pre-trained word and haracter n-gram embeddings is helpful in improving the results The pre-training of the character n-gram embeddings is also effectiv ore-training, the POS accuracy drops from 97.52% to 97.38% and th rom 95.65% to 95.14%, but they are still better than those of using wo Further analysis can be found in Appendix A.\nDifferent layers for different tasks: Table 12 shows the results for the three tasks of our \"JMTABC\" setting and that of not using the short- POS Chunking cut connections and the label embeddings as in Ta- Dependency ble 10. In addition, in the column of \"All-3', we Dependency show the results of using the highest (i.e., the third) Table 12: layer for all of the three tasks without any shortcut layers for connections and label embeddings, and thus the two settings \"w/o SC&LE' and \"All-3' require exactly the same number of the model parameters. The results show that different tasks hampers the effectiveness of our JMT model, an more important than the number of the model parameters.\n-Vertical connections: Finally, we investigated our JMT results without using the vertical connections in the five-layer bi-LSTMs More concretely, when constructing the input vectors gt, we do not use the bi-LSTM hidden states of the previous layers. Ta- ble 13 shows the JMTau results with and without the vertical connections. As shown in the column of \"w/o VC', we observed the competitive results. Therefore, in the target tasks used in our model, sharing the word representations and the output label em beddings is more effective than just stacking the bi-LSTM layers"}, {"section_index": "10", "section_name": "7 CONCLUSION", "section_text": "We presented a joint many-task model to handle a variety of NLP tasks with growing depth of layers in a single end-to-end deep model. Our model is successively trained by considering linguistic hierarchies, directly connecting word representations to all layers, explicitly using predictions in lower tasks, and applying successive regularization. 
In our experiments on five different types of NLP tasks, our single model achieves the state-of-the-art results on chunking, dependency parsing semantic relatedness, and textual entailment."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank the Salesforce Research team members for their fruitful comments and discussions"}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Daniel Andor. Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev. Slav Petrov, and Michael Collins. Globally Normalized Transition-Based Neural Networks. Ir Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2442-2452, 2016.\nTable 11: Effectiveness of the character n-gram embeddings\nTable 12: Effectiveness of using different layers for different tasks..\nTable 13: Effectiveness of the Vertical Connections (VC).\nQian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. Enhancing and Combining Se quential and Tree LSTM for Natural Language Inference. CoRR, abs/1609.06038, 2016.\nChris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. Transition Based Dependency Parsing with Stack Long Short-Term Memory. In Proceedings of the 53r Annual Meeting of the Association for Computational Linguistics and the 7th International Join Conference on Natural Language Processing (Volume 1: Long Papers). pp. 334-343. 2015\nAkiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. Tree-to-Sequence Attentional Neu ral Machine Translation. In Proceedings of the 54th Annual Meeting of the Association for Com putational Linguistics (Volume 1: Long Papers), pp. 823-833, 2016.\nAnkit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victoi Zhong, Romain Paulus, and Richard Socher. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1378-1387, 2016.\nAlice Lai and Julia Hockenmaier. Illinois-LH: A Denotational and Distributional Approach to Se mantics. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pp. 329-334, 2014.\nZhizhong Li and Derek Hoiem. Learning without Forgetting. CoRR, abs/1606.09282, 2016\nWang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. Finding Function in Form: Compositional Character Models for Oper Vocabulary Word Representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1520-1530, 2015.\nMinh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Lukasz Kaiser. Multi-task Sequence to Sequence Learning. In Proceedings of the 4th International Conference on Learning Representations, 2016.\nBernd Bohnet. Top Accuracy and Fast Dependency Parsing is not a Contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, pp. 89-97, 2010\nAlex Graves and Jurgen Schmidhuber. Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures. Neural Networks, 18(5):602-610, 2005.\nXuezhe Ma and Eduard Hovy. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistic. (Volume 1: Long Papers), pp. 1064-1074, 2016.\nMarco Marelli. Luisa Bentivogli. Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 
SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pp. 1-8, 2014.\nTomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed Represen tations of Words and Phrases and their Compositionality. In Advances in Neural Information Processing Systems 26. pp. 3111-3119. 2013.\nIshan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch Networks for Multi-task Learning. CoRR, abs/1604.03539, 2016.\nMasataka Ono, Makoto Miwa, and Yutaka Sasaki. Word Embedding-based Antonym Detection using Thesauri and Distributional Information. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 984-989, 2015.\nAndrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Ko ray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive Neural Networks. CoRR abs/1606.04671, 2016\nAnders Sggaard and Yoav Goldberg. Deep multi-task learning with low level tasks supervised at. lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 231-235, 2016.\nIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to Sequence Learning with Neural Net works. In Advances in Neural Information Processing Svstems 27. pp. 3104-3112. 2014\nJun Suzuki and Hideki Isozaki. Semi-Supervised Sequential Labeling and Segmentation Using Giga-Word Scale Unlabeled Data. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 665-673, 2008.\nVu Pham, Theodore Bluche, Christopher Kermorvant, and Jerome Louradour. Dropout improves Recurrent Neural Networks for Handwriting Recognition. CoRR, abs/1312.4569, 2014.\nRichard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. Semantic Composi- tionality through Recursive Matrix- Vector Spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 1201-1211, 2012.\nKai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved Semantic Representations. From Tree-Structured Long Short-Term Memory Networks. In Proceedings of the 53rd Annual. Meeting of the Association for Computational Linguistics and the 7th International Joint Confer ence on Natural Language Processing (Volume 1: Long Papers), pp. 1556-1566, 2015..\nKristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. Feature-Rich Part of-Speech Tagging with a Cyclic Dependency Network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Compu tational Linguistics, pp. 173-180, 2003.\nDavid Weiss, Chris Alberti, Michael Collins, and Slav Petrov. Structured Training for Neural Net work Transition-Based Parsing. In Proceedings of the 53rd Annual Meeting of the Association. for Computational Linguistics and the 7th International Joint Conference on Natural Language. Processing (Volume 1: Long Papers), pp. 
323-333, 2015."}, {"section_index": "13", "section_name": "A DETAILS OF CHARACTER N-GRAM EMBEDDINGS", "section_text": "Here we first describe the pre-training process of the character n-gram embeddings in detail and then show further analysis on the results in Table 11."}, {"section_index": "14", "section_name": "A.1 PRE-TRAINING WITH SKIP-GRAM OBJECTIVE", "section_text": "We pre-train the character n-gram embeddings using the objective function of the Skip-gram model with negative sampling (Mikolov et al., 2013). We build the vocabulary of the character n-grams based on the training corpus, the case-sensitive English Wikipedia text. This is because such case-sensitive information is important in handling some types of words like named entities. Assume that a word w has its corresponding K character n-grams {cn_1, cn_2, ..., cn_K}, where any overlaps and unknown ones are removed. Then the word w is represented with an embedding v_c(w) computed as follows:
v_c(w) = (1/K) Σ_{i=1}^{K} v(cn_i),
where v(cn_i) is the parameterized embedding of the character n-gram cn_i, and the computation of v_c(w) is exactly the same as the one used in our JMT model explained in Section 2.1.
The remaining part of the pre-training process is the same as the original Skip-gram model. For each word-context pair (w, w̄) in the training corpus, N negative context words are sampled, and the objective function is defined as follows:
− Σ_{(w, w̄)} ( log σ(v_c(w) · ṽ(w̄)) + Σ_{i=1}^{N} log σ(−v_c(w) · ṽ(w̄_i)) ),
where σ(·) is the logistic sigmoid function, ṽ(w̄) is the weight vector for the context word w̄, and w̄_i is a negative sample. It should be noted that the weight vectors for the context words are parameterized for the words without any character information.
Table 14: POS tagging scores on the development set with and without the character n-gram embeddings, focusing on accuracy for unknown words. The overall accuracy scores are taken from Table 11. There are 3,862 unknown words in the sentences of the development set.
Table 15: Dependency parsing scores on the development set with and without the character n-gram embeddings, focusing on UAS and LAS for unknown words. The overall scores are taken from Table 11. There are 976 unknown words in the sentences of the development set."}, {"section_index": "15", "section_name": "A.2 EFFECTIVENESS ON UNKNOWN WORDS", "section_text": "One expectation from the use of the character n-gram embeddings is to better handle unknown words. We verified this assumption in the single task setting for POS tagging, based on the results reported in Table 11. Table 14 shows that the joint use of the word and character n-gram embeddings improves the score by about 19% in terms of the accuracy for unknown words.
We also show the results of the single task setting for dependency parsing in Table 15. Again, we can see that using the character-level information is effective, and in particular, the improvement of the LAS score is large. These results suggest that it is better to use not only the word embeddings but also the character n-gram embeddings by default. Recently, the joint use of word and character information has proven to be effective in language modeling (Miyamoto & Cho, 2016), but just using the simple character n-gram embeddings is fast and also effective."}, {"section_index": "16", "section_name": "B ANALYSIS ON DEPENDENCY PARSING", "section_text": "Our dependency parser is based on the idea of predicting a head (or parent) for each word, and thus the parsing results do not always lead to correct trees. To inspect this aspect, we checked the parsing results on the development set (1,700 sentences), using the "JMTABC" setting.
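Since the parser predicts each head independently, the structural checks described below are easy to script. A minimal sketch (the heads array is a hypothetical prediction, and tree_diagnostics is an illustrative helper, not part of the paper's code):

```python
def tree_diagnostics(heads):
    """Check greedy head predictions: count words attached to ROOT and detect cycles.

    heads[t] is the predicted parent index of word t (0-based), or -1 for ROOT.
    """
    roots = [t for t, h in enumerate(heads) if h == -1]
    cyclic = []
    for t in range(len(heads)):
        seen, node = set(), t
        while node != -1:            # follow parents until ROOT or a repeated node
            if node in seen:
                cyclic.append(t)
                break
            seen.add(node)
            node = heads[node]
    return len(roots), sorted(set(cyclic))

# Hypothetical predictions for a 5-word sentence: words 2 and 3 form a cycle,
# and two words (0 and 4) are both attached to ROOT.
print(tree_diagnostics([-1, 0, 3, 2, -1]))  # -> (2, [2, 3])
```

Counting sentences whose root count differs from one is exactly how the multiple-root and no-root cases reported next can be identified.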
Table 14: POS tagging scores on the development set with and without the character n-gram embeddings, focusing on accuracy for unknown words. The overall accuracy scores are taken from Table 11. There are 3,862 unknown words in the sentences of the development set.

Table 15: Dependency parsing scores on the development set with and without the character n-gram embeddings, focusing on UAS and LAS for unknown words. The overall scores are taken from Table 11. There are 976 unknown words in the sentences of the development set.

A.2 EFFECTIVENESS ON UNKNOWN WORDS

One expectation from the use of the character n-gram embeddings is to better handle unknown words. We verified this assumption in the single task setting for POS tagging, based on the results reported in Table 11. Table 14 shows that the joint use of the word and character n-gram embeddings improves the score by about 19% in terms of the accuracy for unknown words.

We also show the results of the single task setting for dependency parsing in Table 15. Again, we can see that using the character-level information is effective, and in particular, the improvement of the LAS score is large. These results suggest that it is better to use not only the word embeddings but also the character n-gram embeddings by default. Recently, the joint use of word and character information has proven to be effective in language modeling (Miyamoto & Cho, 2016), but just using the simple character n-gram embeddings is fast and also effective.

B ANALYSIS ON DEPENDENCY PARSING

Our dependency parser is based on the idea of predicting a head (or parent) for each word, and thus the parsing results do not always lead to correct trees. To inspect this aspect, we checked the parsing results on the development set (1,700 sentences), using the "JMT_ABC" setting.

In the dependency annotations used in this work, each sentence has only one root node, and we found 11 sentences with multiple root nodes and 11 sentences with no root nodes in our parsing results. We show two examples below:

(a) Underneath the headline " Diversification , " it counsels , " Based on the events of the past week , all investors need to know their portfolios are balanced to help protect them against the market 's volatility . "
(b) Mr. Eskandarian , who resigned his Della Femina post in September , becomes chairman and chief executive of Arnold .

In the example (a), the two boldfaced words "counsels" and "need" are predicted as child nodes of the root node, and the underlined word "counsels" is the correct one based on the gold annotations. This example sentence (a) consists of multiple internal sentences, and our parser misunderstood that both of the two verbs are the heads of the sentence.

In the example (b), none of the words is connected to the root node, and the correct child node of the root is the underlined word "chairman". Without the internal phrase "who resigned ... in September", the example sentence (b) is very simple, but we have found that such a simplified sentence is still not parsed correctly. In many cases, verbs are linked to the root nodes, but sometimes other types of words like nouns can be the candidates. In our model, the single parameterized vector r is used to represent the root node for each sentence. Therefore, the results of the examples (a) and (b) suggest that it would be necessary to capture various types of root nodes, and using sentence-dependent root representations would lead to better results in future work.

We inspected the development set results on the semantic tasks using the "JMT_all" setting. In our model, the highest-level task is the textual entailment task. We show an example premise-hypothesis pair which is misclassified in our results:

Premise: "A surfer is riding a big wave across dark green water", and
Hypothesis: "A surfer is riding a small wave across dark green water".

The predicted label is entailment, but the gold label is contradiction. This example is very easy to classify by focusing on the difference between the two words "big" and "small". However, our model fails to correctly classify this example because there are few opportunities to learn the difference. Our model relies on the pre-trained word embeddings based on word co-occurrence statistics (Mikolov et al., 2013), and it is widely known that such co-occurrence-based embeddings can rarely discriminate between antonyms and synonyms (Ono et al., 2015). Moreover, the other four tasks in our JMT model do not explicitly provide the opportunities to learn such semantic aspects. Even in the training data of the textual entailment task, we can find only one example to learn the difference between the two words, which is not enough to obtain generalization capacities. Therefore, it is worth investigating the explicit use of external dictionaries or the use of pre-trained word embeddings learned with such dictionaries (Ono et al., 2015), to see whether our JMT model is further improved not only for the semantic tasks
but also for the low-level tasks.

D HOW DO SHARED EMBEDDINGS CHANGE

In our JMT model, the word and character n-gram embedding matrices are shared across all of the five different tasks. To better qualitatively explain the importance of the shortcut connections shown in Table 9, we inspected how the shared embeddings change when fed into the different bi-LSTM layers. More concretely, we checked the closest neighbors, in terms of cosine similarity, of the word representations before and after being fed into the forward LSTM layers. In particular, we used the corresponding part of W in Eq. (1) to perform a linear transformation of the input embeddings, because u_t directly affects the hidden states of the LSTMs. Thus, this is a context-independent analysis.

Table 16 shows the examples for the word "standing". The row "Embedding" shows the cases of using the shared embeddings, and the others show the results of using the linear-transformed embeddings. In the column "Only word", the results of using only the word embeddings are shown. The closest neighbors in the case of "Embedding" capture the semantic similarity, but after being fed into the POS layer, the semantic similarity is almost washed out. This is not surprising, because it is sufficient to cluster the words of the same POS tags: here, NN, VBG, etc. In the chunking layer, the similarity in terms of verbs is captured, and this is because it is sufficient to identify the coarse chunking tags: here, VP. In the dependency layer, the closest neighbors are adverbs, gerunds of verbs, and nouns, and all of them can be child nodes of verbs in dependency trees. However, this information is not sufficient in further classifying the dependency labels. Then we can see that, in the column "Word and char", jointly using the character n-gram embeddings adds the morphological information, and as shown in Table 11, the LAS score is substantially improved.

In the case of the semantic tasks, the projected embeddings capture not only syntactic but also semantic similarities. These results show that different tasks need different aspects of the word similarities, and our JMT model efficiently transforms the shared embeddings for the different tasks by the simple linear transformation. Therefore, without the shortcut connections, the information about the word representations is fed into the semantic tasks after being transformed in the lower layers, where the semantic similarities are not always important. Indeed, the results of the semantic tasks are very poor without the shortcut connections.

Table 16: Closest neighbors of the word "standing" in the embedding space and the projected space in each forward LSTM.

Layer       | Word and char                                           | Only word
Embedding   | leaning, kneeling, saluting, clinging, railing          | stood, stands, sit, pillar, cross-legged
POS         | warning, waxing, dunking, proving, tipping              | ladder, rc6280, bethle, warning, f-a-18
Chunking    | applauding, disdaining, pickin, readjusting, reclaiming | fight, favor, pick, rejoin, answer
Dependency  | guaranteeing, resting, grounding, hanging, hugging      | patiently, hugging, anxiously, resting, disappointment
Relatedness | stood, stands, unchallenged, notwithstanding, judging   | stood, unchallenged, stands, beside, exists
Entailment  | nudging, skirting, straddling, contesting, footing      | beside, stands, pillar, swung, ovation
INFERENCE & INTROSPECTION IN DEEP GENERATIVE MODELS OF SPARSE DATA

Matthew Hoffman
Adobe Research
matthoffm@adobe.com

ABSTRACT

Deep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators.
However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to sparse, high-dimensional integer data such as word counts or product ratings. One reason is that the standard training procedures find poor local optima when applied to such data. We propose two techniques that alleviate this problem, significantly improving our ability to fit DLGMs to sparse, high-dimensional data. Having fit these models, we are faced with another challenge: how to use and interpret the representation that we have learned? To that end, we propose a method that extracts distributed representations of features via a simple linearization of the model.

1 INTRODUCTION

Deep latent Gaussian models (DLGMs, a.k.a. variational autoencoders; Rezende et al., 2014; Kingma et al., 2014) have led a resurgence in the use of deep generative models for density estimation. DLGMs assume that observed vectors x are generated by applying a nonlinear transformation (defined by a neural network with parameters θ) to a vector of Gaussian random variables z.

Learning in DLGMs proceeds by approximately maximizing the average marginal likelihood p(x) = ∫ p(z)p(x|z)dz of the observations x. Computing the true marginal likelihood is intractable, so we resort to variational expectation-maximization (Bishop, 2006), an approximation to maximum likelihood estimation. To learn the parameters θ of the generative model, the procedure needs to find a distribution q(z|x) that approximates the posterior distribution p(z|x) of the latent vector z given the observations x. In the past, such q distributions were fit using iterative optimization procedures (e.g., Hoffman et al., 2013). But Rezende et al. (2014) and Kingma et al. (2014) showed that q(z|x) can be parameterized by a feedforward "inference network" with parameters φ, speeding up learning. This inference network is trained jointly with the generative model; as training proceeds, the inference network learns to approximate posterior inference on the generative model, and the generative model improves itself using the output of the inference network.

Embedded within this procedure, however, lies a potential problem: both the inference network and the generative model are initialized randomly. Early on in learning, the inference network's q(z|x) distributions will be poor approximations to the true posterior p(z|x), and the gradients used to update the parameters of the generative model will therefore be poor approximations to the gradients of the true log-likelihood log p(x). Previous stochastic variational inference methods (Hoffman et al., 2013) were slower, but suffered less from this problem, since for every data point a set of variational parameters was optimized within the inner loop of learning. In this work, we investigate blending the two methodologies for learning models of sparse data. In particular, we use the parameters predicted by the inference network as an initialization and optimize them further during learning. When modeling high-dimensional sparse data, we show that updating the local variational parameters yields generative models with better held-out likelihood, particularly for deeper generative models.

What purpose is served by fitting bigger, deeper, more powerful generative models? Breiman (2001) argues that statistical discriminative modeling falls into two schools of thought: the data modeling culture and the algorithmic modeling culture. The former advocates the use of predictive models that assume interpretable, mechanistic processes, while the latter advocates the use of black-box techniques with an emphasis on prediction accuracy. Breiman's arguments also ring true about the divide between deep generative models with complex conditional distributions and simpler, more interpretable statistical models. Consider a classic model such as Latent Dirichlet Allocation (Blei et al., 2003). It is outperformed in held-out likelihood (Miao et al., 2016) by deeper generative models, and it assumes a simple probabilistic process for data generation that is unlikely to hold in reality. Yet its generative semantics lend it a distinct advantage: interpretability. The word-topic matrix in the model allows practitioners to read off what the model has learned about the data. Is there a natural way to interpret the generative model when the conditional distributions are parameterized by a deep neural network?

Our second contribution is to introduce a simple, easy-to-implement method to interpret what is being learned by generative models such as DLGMs whose conditional probabilities are parameterized by deep neural networks. Our hope is to narrow the perceived gulf between a complex generative model's representational power and its interpretability. We use the Jacobian of the conditional distribution with respect to latent variables in the Bayesian network to form embeddings (or Jacobian vectors) of the observations. We investigate the properties of the Jacobian vectors obtained from deeper, more non-linear generative models.

2 BACKGROUND

Generative Model: We consider learning in generative models of the form shown in Figure 1. We observe a set of D word-count vectors x_{1:D}, where x_{dv} denotes the number of times that word index v ∈ {1, ..., V} appears in document d. We assume we are given the total number of words per document N_d = Σ_v x_{dv}, and that x_d was generated via the following generative process:

    z_d ~ N(0, I);   γ(z_d) = MLP(z_d; θ);   π(z_d)_v = exp{γ(z_d)_v} / Σ_{v'} exp{γ(z_d)_{v'}};   x_d ~ Multinomial(π(z_d), N_d).     (1)

That is, we draw a Gaussian random vector, pass it through a multilayer perceptron (MLP) with parameters θ, pass the resulting vector through the softmax (a.k.a. multinomial logistic) function, and sample N_d times from the resulting distribution over the vocabulary. (We neglect the multinomial coefficient N_d!/(x_{d1}! ⋯ x_{dV}!), which amounts to assuming that the words are observed in a particular order.)
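As a concrete illustration of the generative process in Eq. 1, here is a minimal NumPy sketch that draws one synthetic document. The one-hidden-layer MLP, the dimensions, and the random parameter initialization are hypothetical placeholders standing in for a trained model.

    import numpy as np

    rng = np.random.RandomState(0)
    K, V, N_d = 100, 2000, 50          # latent dim, vocabulary size, words per document (hypothetical)
    W1, b1 = rng.randn(K, 300) * 0.1, np.zeros(300)
    W2, b2 = rng.randn(300, V) * 0.1, np.zeros(V)

    def mlp(z):
        # gamma(z) = MLP(z; theta): one hidden tanh layer as a stand-in for the paper's MLP
        h = np.tanh(z @ W1 + b1)
        return h @ W2 + b2

    z_d = rng.randn(K)                       # z_d ~ N(0, I)
    gamma = mlp(z_d)                         # unnormalized potentials
    pi = np.exp(gamma - gamma.max())         # softmax, shifted for numerical stability
    pi /= pi.sum()
    x_d = rng.multinomial(N_d, pi)           # x_d ~ Multinomial(pi(z_d), N_d)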
Figure 1: Deep Latent Gaussian Model: The Bayesian network depicted here comprises a single latent variable, with the conditional probability p(x|z) defined by a deep neural network with parameters θ. The dotted line represents the inference network parameterized by φ, which is used for posterior inference at train and test time.

Variational Learning: For ease of exposition, we drop the subscript on x_d to form x, referring to a single data point. We need to approximate the intractable posterior distribution p(z|x) during learning. Using the well-known variational principle, we can obtain the lower bound on the log marginal likelihood of the data (denoted L(x; θ, φ)) in Eq. 2, where the inequality is by Jensen's inequality:

    log p(x; θ) ≥ E_{q_φ(z|x)}[log p_θ(x|z)] − KL(q_φ(z|x) || p(z)) = L(x; θ, φ).     (2)

We leverage an inference network or recognition network (Hinton et al., 1995), a neural network which approximates the intractable posterior, during learning. This is a parametric conditional distribution that is optimized to perform inference. Kingma & Welling (2014) and Rezende et al. (2014) use a neural net (with parameters φ) to parameterize q_φ(z|x). Denote by {μ(x), Σ(x)} the local variational parameters predicted by the inference network. The challenge in the resulting optimization problem is that the lower bound (2) includes an expectation w.r.t. q_φ(z|x), which implicitly depends on the network parameters φ. This difficulty is overcome by using stochastic backpropagation: a simple transformation allows one to obtain unbiased Monte Carlo estimates of the gradients of E_{q_φ(z|x)}[log p_θ(x|z)] with respect to φ. If we assume the prior p(z) is also normally distributed, the KL term and its gradients may be obtained analytically. Throughout this paper, we will use θ to denote the parameters of the generative model and φ to denote the parameters of the inference network.

Inference with Global Information: Sparse data typically exhibits long tails, and learning in the presence of rare features is challenging. Inference networks learn to regress to the optimal posterior parameters for every data point, and global information about the relative frequencies of the individual features in the training distribution may present valuable information during learning.

The simplest way to incorporate first-order statistics across the training data into the inferential process is to condition on tf-idf (Baeza-Yates et al., 1999) features instead of the raw counts. tf-idf is one of the most widely used techniques in information retrieval. In the context of building bag-of-words representations for documents, tf-idf re-weights features to increase the influence of rarer words while decreasing the influence of common words appearing in all documents. The tf-idf-transformed data is normalized by its L2 norm. It is worthwhile to note that leveraging first-order statistics for inference is difficult in the traditional paradigm of tracking variational parameters for each data point, but is easy with inference networks.

Optimizing Local Variational Parameters: The inference network initially comprises a randomly initialized neural network. The predictions of the inference network early in optimization are suboptimal variational parameters used to derive gradients of the parameters of the generative model. This induces noise and bias in the gradients used to update the parameters of the generative model; this noise and bias may push the generative model towards a poor local optimum. Previous work has suggested that deep neural networks (which form the conditional probability distributions p_θ(x|z)) are sensitive to initialization (Glorot & Bengio, 2010; Larochelle et al., 2009).

To avoid these issues, we only use the local variational parameters ψ_0(x) predicted by the inference network to initialize an iterative optimizer that maximizes the ELBO with respect to ψ; we use the optimized variational parameters ψ_M(x) to derive gradients for the generative model. We then train the inference network using stochastic backpropagation and gradient descent, holding the parameters of the generative model θ fixed. Our procedure is detailed in Algorithm 1.

Algorithm 1 Pseudocode for Learning: We evaluate expectations in L(x) (see Eq. 2) using a single sample from the variational distribution and aggregate gradients across mini-batches. M = 1 corresponds to performing no additional optimization of the variational parameters. We update θ, ψ(x), φ using stochastic gradient descent with adaptive learning rates η_θ, η_ψ(x), η_φ obtained via ADAM (Kingma & Ba, 2015).

1. while not converged:
2.     Sample a data point x; estimate the local variational parameters ψ_0(x) using q_φ(z|x)
3.     Estimate ψ_M(x) ≈ ψ*(x) = argmax_{ψ(x)} L(x; θ, ψ(x)) via SGD: for m = 1, ..., M,
           ψ_{m+1}(x) = ψ_m(x) + η_ψ(x) ∂L(x; θ, ψ_m(x)) / ∂ψ_m(x)
4.     Update θ as: θ ← θ + η_θ ∇_θ L(x; θ, ψ_M(x))
5.     Update φ as: φ ← φ + η_φ ∇_φ L(x; θ, ψ_0(x))
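The inner loop of Algorithm 1 (step 3) can be written compactly. The sketch below performs M gradient ascent steps on the local variational parameters ψ(x) for a single example, given a function returning the ELBO and its gradient with respect to ψ; elbo_and_grad is a hypothetical helper (in Theano or a similar framework, this gradient comes from automatic differentiation), and the toy quadratic stand-in only serves to make the sketch run end to end. The paper uses ADAM; plain SGD is used here for brevity.

    import numpy as np

    def optimize_local_params(x, psi0, elbo_and_grad, M=100, lr=0.1):
        # Refine the inference network's prediction psi0 by M ascent steps on L(x; theta, psi).
        psi = psi0.copy()
        for _ in range(M):
            _, g = elbo_and_grad(x, psi)   # dL/dpsi
            psi += lr * g                  # ascend the bound
        return psi

    def toy_elbo_and_grad(x, psi):
        # Toy stand-in: a concave quadratic in psi with maximum at x, so the
        # refinement provably moves psi toward the optimum.
        return -0.5 * np.sum((psi - x) ** 2), -(psi - x)

    x = np.array([0.5, -1.0])
    psi_M = optimize_local_params(x, np.zeros(2), toy_elbo_and_grad, M=100)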
Consider first some classical models in which the conditional expectation is a linear function of the latent or input variables:

Regression: E[y|x] = Wx + b;
Factor Analysis: x ~ N(0, I); E[y|x] = Wx + b;
Latent Dirichlet Allocation: x ~ Dirichlet(α); E[y|x] = Wx.

In each case, we need only inspect the parameter matrix W to answer the question "what happens to y if we increase x_k a little?" The answer is clear—y moves in the direction of the kth row of W; that is, the Jacobian matrix ∂E[y|x]/∂x is simply the parameter matrix W.

For models as in Fig. 1, the variability in the training data is assumed to be due to the single latent state z. The relationship between the latent variables z and the observations x cannot be quickly read off of the parameters θ. But we can still ask what happens if we perturb z by some small δz—this is simply the directional derivative (∂γ/∂z)δz. We can interpret the Jacobian matrix ∂γ/∂z much the same way we would a factor loading matrix, with two main differences. First, the Jacobian matrix is itself a function of z, so it only describes the behavior of the model locally. Second, DLGMs exhibit rotational symmetry—the prior on z is rotationally symmetric, and the MLP can apply arbitrary rotations to z before applying any nonlinearities, so a priori there is no "natural" set of basis vectors for z. For a given Jacobian matrix, however, we can find the most significant directions via a singular value decomposition (SVD).

Jacobian Vectors: We present our method to generate embeddings from Bayesian networks of the form shown in Figure 1. We consider three variants of Jacobian embedding vectors, based on the unnormalized potentials from the MLP, the logarithmic probabilities, and the linear probabilities, respectively:

    J(z)^pot = ∂γ(z)/∂z,   J(z)^log = ∂ log π(z)/∂z,   J(z)^prob = ∂π(z)/∂z.     (4)

For any z, {J(z)^log, J(z)^pot, J(z)^prob} ∈ R^{V×K}, where K is the latent dimension and V is the dimensionality of the observations. It is this matrix that we use to form embeddings. We denote by u_i the Jacobian vector obtained from the ith row of the Jacobian matrix. When not referring to a particular variant, we use J(z) to denote the Jacobian matrix. J(z) is a function of z, leaving open the choice of where to evaluate this function. The semantics of our generative model suggest a natural choice: J_mean := E_{p(z)}[J(z)]. This set of embeddings captures the variation in the output distribution with respect to the latent state across the prior distribution of the generative model. Additionally, one may also evaluate the Jacobian at the approximate posterior corresponding to an observation x. We study how this may be used to obtain contextual word vectors.

In frameworks that support automatic differentiation (e.g., Theano; Theano Development Team, 2016), J(z) is readily available, and we estimate J_mean via Monte-Carlo sampling from the prior.
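For the log-linear decoder γ(z) = Wz + b, all three Jacobians of Eq. 4 have closed forms (derived for the linear case in the paragraphs that follow), so J_mean can be estimated by simple averaging over prior samples. This NumPy sketch, with hypothetical dimensions and random weights standing in for a trained model, computes all three variants and the Monte-Carlo estimate of J_mean.

    import numpy as np

    rng = np.random.RandomState(0)
    K, V, L = 10, 50, 64                 # latent dim, vocab size, Monte-Carlo samples (hypothetical)
    W, b = rng.randn(V, K) * 0.1, np.zeros(V)

    def jacobians(z):
        pi = np.exp(W @ z + b); pi /= pi.sum()
        J_pot = W                        # d gamma / dz
        J_log = W - pi @ W               # row i: w_i - sum_j p_j w_j
        J_prob = pi[:, None] * J_log     # row i: p_i (w_i - sum_j p_j w_j)
        return J_pot, J_log, J_prob

    # J_mean = E_{p(z)}[J(z)], estimated with L samples from the prior:
    samples = [jacobians(rng.randn(K)) for _ in range(L)]
    J_mean_log = np.mean([s[1] for s in samples], axis=0)   # V x K embedding matrix
    word_vector = J_mean_log[7]                             # Jacobian vector u_i for word index 7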
Deriving Jacobian Vectors: For simplicity, we derive the functional form of the Jacobian in a linear model, i.e., where γ(z_d) = Wz_d (cf. Eq. 1). We drop the subscript d and denote by γ_i(z) the ith element of the vector γ(z), so that

    p(x_i = 1|z) = exp(γ_i(z)) / Σ_j exp(γ_j(z))   and   γ_i(z) = w_i^T z.

For linear models, ∇_z γ_i(z) = w_i directly corresponds to J(z)^pot. Noting that ∇_z exp(γ_i(z)) = exp(γ_i(z)) ∇_z γ_i(z) and ∇_z Σ_j exp(γ_j(z)) = Σ_j exp(γ_j(z)) ∇_z γ_j(z), we estimate J(z)^prob as:

    ∇_z p(x_i = 1|z) = [Σ_j exp(γ_j(z)) ∇_z exp(γ_i(z)) − exp(γ_i(z)) ∇_z Σ_j exp(γ_j(z))] / (Σ_j exp(γ_j(z)))^2
                     = p(x_i = 1|z) w_i − p(x_i = 1|z) Σ_j p(x_j = 1|z) w_j
                     = p(x_i = 1|z) (w_i − Σ_j p(x_j = 1|z) w_j).

Similarly, we may compute J(z)^log:

    ∇_z log p(x_i = 1|z) = w_i − Σ_j p(x_j = 1|z) w_j = Σ_j p(x_j = 1|z) (w_i − w_j).     (5)

Denote a word-pair vector as w_i − w_j, where w_i, w_j are columns of the matrix W. If we define the set of all word-pair vectors as S, then Eq. 5 captures the idea that the vector representation of a word i lies in the convex hull of S. Furthermore, the word vector's location in CONV(S) is determined by the likelihood of the pairing word x_j under the model, p(x_j = 1|z).

When we use a non-linear conditional probability distribution, J(z)^log becomes ∇_z log p(x_i = 1|z) = Σ_j p(x_j = 1|z)(∇_z γ_i(z) − ∇_z γ_j(z)), where ∇_z γ_i(z) is a non-linear function of z. To the best of our knowledge, Jacobian vectors and their properties have not been studied.

4 RELATED WORK

Learning in Deep Generative Models: Salakhutdinov & Larochelle (2010) optimize the local variational parameters obtained from an inference network when learning deep Boltzmann machines. For DLGMs, Hjelm et al. (2016) also consider the optimization of the local variational parameters, though their exposition focuses on deriving an importance-sampling-based bound to use during learning in deep generative models with discrete latent variables. Their experimental results suggest the procedure does not improve performance much on the binarized MNIST dataset. This is consistent with our experience—we found that our secondary optimization procedure helped more when modeling sparse, high-dimensional text data than when modeling MNIST.

Leveraging Gradient Information: The algorithmic procedure for obtaining Jacobian vectors that we propose resembles that used to derive Fisher score features. For a data point X under a parametric distribution p(X; θ), the Fisher score is defined as U_X = ∇_θ log p(X; θ). Jaakkola & Haussler (2007) similarly use U_X to form a kernel function for subsequent use in a discriminative classifier. The intuition behind such methods is to note that the derivative of the log-probability with respect to the parameters of the generative model encodes all the variability in the input under the generative process. We rely on a related intuition, although our motivations are different; we are interested in characterizing isolated features such as words, not vector observations such as documents. Also, we consider Jacobians with respect to per-observation latent variables z, rather than globally shared parameters θ.

In the context of discriminative modeling, Erhan et al. (2009) use gradient information to study the patterns with which neurons are activated in deep neural networks, while Wang et al. (2016) use the
spectra of the Jacobian to study the complexity of the functions learned by neural networks.

Miao et al. (2016) learn a shallow log-linear model on text data and obtain embeddings for words from the weight matrix that parameterizes their generative model. Li et al. (2016) propose a modification to LDA that explicitly models representations for words in addition to modeling the word-topic structure.

Introspection via Embeddings: Landauer et al. (1998) proposed latent semantic analysis, one of the earliest works to create vector space representations of documents. Bengio et al. (2003) and Mikolov & Dean (2016) propose log-linear models to create word representations from document corpora in an unsupervised fashion. Rudolph et al. (2016) describe a family of models to create contextual embeddings where the conditional distributions lie in the exponential family. Finally, Choi et al. (2016) propose a variant of Word2Vec to create representations of diagnosis codes from temporal Electronic Health Record data. The models above explicitly condition the probability of a word on its nearby context. In contrast, our model models the probability of a word as it appears in the document (or rather, conditioned on its global context). Augmenting the generative model in Figure 1 to incorporate local context is a possible direction for future work.

Text Data: We study the effect of further optimization of the variational parameters and inference with tf-idf features on two datasets of varying size: the smaller 20Newsgroups (Lang, 2008) (train/valid/test: 10768/500/7505, V: 2000) and the larger RCV2 (Lewis et al., 2004) dataset (train/valid/test: 789414/5000/10000, V: 10000). We follow the preprocessing procedure defined in (Miao et al., 2016) for both datasets. We also train models on the Wikipedia corpus used in (Huang et al., 2012). We remove stop words and words appearing fewer than ten times in the dataset, and select our vocabulary to comprise the union of the top 20000 words in the corpus, the words in the WordSim353 dataset (Finkelstein et al., 2001), and the words in the Stanford Contextual Word Similarity dataset (SCWS; Huang et al., 2012). The resulting dataset is of size train/valid: 1212781/1000 and V: 20253.

EHR Data: We train shallow and deep generative models on a dataset constructed from Electronic Medical Records. The dataset comprises 185000 patients, where each patient's data across time was aggregated to create a bag-of-diagnosis-codes representation of the patient. The vocabulary comprises four different kinds of medical diagnosis codes: ICD9 (diagnosis), LOINC (laboratory tests), NDC (prescription medication), and CPT (procedures). In total, we have 51321 diagnosis codes.

Training Procedure: On all datasets, we train shallow log-linear models (γ(z) = Wz + b) and deeper three-layer DLGMs (γ(z) = MLP(z; θ)). We vary the number of secondary optimization steps M = 1, 100 (cf. Algorithm 1) to study the effect of optimization on ψ(x) with ADAM (Kingma & Ba, 2015).
We use a mini-batch size of 500, and a learning rate of 0.01 for ψ(x) and 0.0008 for θ. The inference network was fixed to a two-layer MLP whose intermediate hidden layer h(x) was used to parameterize the mean and diagonal log-variance μ(x), log Σ(x). To evaluate the quality of the learned generative models, we report an upper bound on perplexity (Mnih & Gregor, 2014) given by exp(−(Σ_i log p(x_i)) / (Σ_i N_i)), where log p(x_i) is replaced by the bound in Eq. 2. The notation 3-M100-tfidf indicates a model where the MLP parameterizing γ(z) has three hidden layers, the local variational parameters are updated 100 times before an update of θ, and tf-idf features were used in the inference network.
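The perplexity bound above is straightforward to compute once per-document ELBO values are available. In this sketch, elbos holds L(x_i; θ, ψ) for each held-out document and lengths holds the word counts N_i; both arrays are filled with made-up placeholder numbers.

    import numpy as np

    def perplexity_upper_bound(elbos, lengths):
        # exp( -(sum_i log p(x_i)) / (sum_i N_i) ), with log p(x_i) replaced by the ELBO (Eq. 2).
        return np.exp(-np.sum(elbos) / np.sum(lengths))

    elbos = np.array([-410.2, -355.9, -502.4])   # per-document ELBO values (hypothetical)
    lengths = np.array([70, 61, 88])             # N_i: number of words in each document
    print(perplexity_upper_bound(elbos, lengths))

Because the ELBO lower-bounds log p(x_i), the resulting number upper-bounds the true perplexity.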
Improving Learning: Table 1 depicts our results on 20Newsgroups and RCV2. On the smaller dataset, we find that the deeper models overfit quickly and are outperformed by shallow generative models. On the larger datasets, the deeper models' capacity is more readily utilized, yielding better generalization. The use of tf-idf features always helps learning on smaller datasets. On larger datasets, the benefits are smaller when we also optimize ψ(x). Finally, the optimization of the local variational parameters appears to help most on the larger datasets. To investigate how this occurs, we plot the held-out likelihood versus epochs. For models trained on the larger RCV2 (Figure 2a) and Wikipedia (Figure 2b) datasets, the larger deep generative models converge to better solutions (and in fewer passes through the data) with the additional optimization of ψ(x).

In Table 5 in the supplementary material, we study the effect of varying the parameters of the inference network. There, we perform a small grid search over the hidden dimension and the number of layers in the inference network and find that optimizing the variational parameters continues to produce models with lower overall perplexity.

Jacobian Vectors: Our first avenue for introspection into the learned generative model is using the log-singular values of the Jacobian matrix. Since the Jacobian matrix precisely encodes how sensitive the outputs are with respect to the inputs, the log-singular value spectrum of this matrix directly captures the amount of variance in the data explained by the latent space. Said differently, we can read off the number of active units in the DLGM or VAE by counting the number of log-singular values larger than zero. Furthermore, this method of introspection depends only on the parameters of the generative model; we investigate how the Jacobian matrix may be used for model introspection by studying the spectrum of its singular values. In Figures 2d and 2e, we see that for larger models, continuing to optimize the variational parameters allows us to learn models that use many more of the available latent dimensions. This suggests that, when fit to text data, DLGMs may be particularly susceptible to the overpruning phenomenon noted by Burda et al. (2015). In Figure 2, the lower held-out perplexity and the increased utilization of the latent space suggest that the continued optimization of the variational parameters yields more powerful generative models.

To investigate where optimizing ψ(x) is particularly effective, we train a three-layer model on different subsets of the Wikipedia dataset. The subsets are created by selecting the top K most frequently occurring features in the data. Our rationale is that by holding everything else fixed and varying the level of sparsity in the data (datasets with smaller values of K are less sparse), we can begin to understand when our method is most helpful. On held-out data, we compute the difference between the perplexity when the model is trained with M = 1 (denoted P_M1) and with M = 100 (denoted P_M100), and compute the relative decrease in perplexity as (P_M1 − P_M100) / P_M100. The results are depicted in Figure 2c, where we see that our method improves learning as a function of the dimensionality of the data.

Table 1: Test Perplexity. Left: Baseline results on the 20Newsgroups and RCV1-v2 datasets. Legend: LDA (Blei et al., 2003), Replicated Softmax (RSM) (Hinton & Salakhutdinov, 2009), Sigmoid Belief Networks (SBN) and Deep Autoregressive Networks (DARN) (Mnih & Gregor, 2014), Neural Variational Document Model (NVDM) (Miao et al., 2016). K denotes the latent dimension in our notation. Right: DLGMs on text data with K = 100. We vary the features presented to the inference network q_φ(z|x) during learning between normalized count vectors (denoted "norm") and normalized tf-idf (denoted "tf-idf") features.

Model | K   | 20News | RCV1-v2
LDA   | 50  | 1091   | 1437
LDA   | 200 | 1058   | 1142
RSM   | 50  | 953    | 988
SBN   | 50  | 909    | 784
fDARN | 50  | 917    | 724
fDARN | 200 | —      | 598
NVDM  | 50  | 836    | 563
NVDM  | 200 | 852    | 550

DLGM         | 20News M1 | 20News M100 | RCV1-v2 M1 | RCV1-v2 M100
1-M1-norm    | 964  | 816 | 498 | 479
1-M100-norm  | 1182 | 831 | 485 | 453
3-M1-norm    | 1040 | 866 | 408 | 360
3-M100-norm  | 1341 | 894 | 378 | 329
1-M1-tfidf   | 895  | 785 | 475 | 453
1-M100-tfidf | 917  | 792 | 480 | 451
3-M1-tfidf   | 1027 | 852 | 391 | 346
3-M100-tfidf | 1029 | 833 | 377 | 327

Figure 2: (a) RCV2 and (b) Wikipedia: validation perplexity versus epochs; (c) relative decrease in perplexity versus the number of retained features; (d) RCV2 and (e) Wikipedia: sorted log-singular values of the Jacobian matrix. Best viewed in color. For the RCV2 and Wikipedia (large) datasets, we visualize the validation perplexity as a function of epochs. The solid lines indicate the validation perplexity for M = 1 and the dotted lines indicate M = 100. The x-axis is not directly comparable on running times, since larger values of M take longer during training. We find that learning with M = 100 takes approximately 15 times as long per mini-batch of size 500 on the text datasets. Figure 2c compares relative differences in the final held-out perplexity, denoted P, between models trained using M = 1 and M = 100; on the x-axis, we vary the number of features used in the dataset. Figures 2d and 2e depict the sorted log-singular values of the Jacobian matrices.
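Counting active units as described above takes a single SVD. Below is a minimal sketch: given an estimated J_mean (here a random low-rank stand-in for a trained model's Jacobian), the number of log-singular values above zero approximates the number of latent dimensions the model actually uses.

    import numpy as np

    rng = np.random.RandomState(0)
    V, K, r = 500, 100, 12                          # a rank-12 stand-in for a pruned model
    J_mean = rng.randn(V, r) @ rng.randn(r, K) * 3  # hypothetical V x K Jacobian estimate

    log_sv = np.log(np.linalg.svd(J_mean, compute_uv=False))
    active_units = int(np.sum(log_sv > 0.0))        # latent dimensions the model uses
    print(active_units)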
C&W uses embeddings from the language model of (Collobert & Weston2008). Glove corresponds to embeddings by Pennington et al.|2 2014). The learning algorithm for our embeddings does not use local context.\ncontext's Wikipedia page. Then, we use the learned inference network to perform posterior inference display the nearest neighbors for each word under different contextual Jacobian vectors and find that, while not always perfect, they capture different contextually relevant semantics. The take-away here is that by combining posterior inference in this Bayesian network with our methodology of introspecting the model, one obtains different context-specific representations for the observations despite not having been trained to capture this explicitly.\nIn Table [8b(appendix), we visualize clusters formed from the embeddings of medical diagnosis. codes to find that they exhibit topical coherence. In Table 2c] the nearest neighbors of drugs include other drugs prescribed in conjunction with or as a replacement for the query drug. For diagnosis. codes such as \"Asbestosis', the nearest neighbors are symptoms and procedures associated with the\n(b) SCwS: (S) denotes a single prototype. approach versus (M) that denotes a multi prototype approach (that leverages context).\nModels MRMNDF-RT MRMNDF-RT MRMcCs MRMccs (May Treat) (May Prevent) (Fine Grained) (Coarse Grained) (De Vine et al.[[2014) 53.21 57.14 22.63 24.56 (Choi et al. |2016) 59.40 55.71 44.80 47.43 SCUI 52.75 48.57 34.16 37.31 1-M1 Jmean 59.63 32.86 31.58 33.88 3-M100 Jnean 60.32 38.57 37.77 40.87\nThe Semantics of Embeddings: We evaluate the vector space representations that we obtain fron. Jmean on benchmarks (such as WordSim353 (Finkelstein et al.]2001) and SCwS (Huang et al.. 2012)) that attempt to measure the similarity of words. The algorithmically derived measure o. similarity is compared to a human-annotated score (between one and ten) using the Spearman ran! correlation. The models that we compare to primarily use local context, which yields a more precise. signal about the meanings of particular words. Closest to us in terms of training procedure is (Huang. (G)) in Table[3a] whose model we outperform. Finding ways to incorporate local context is fertile. ground for future work on models tailor-made for extracting embeddings..\nFor medical codes, we follow the method in (Choi et al.|2016). The authors build two kinds of evaluations to estimate whether an embedding space of medical diagnosis codes captures medically. related concepts well. MRMnDF-Rr (Medical Relatedness Measure under NDF-RT) leverages a database (NDF-RT) to evaluate how good an embedding space is at answering analogical queries. between drugs and diseases such as uDiabetes ~ uMetformin (uLung Cancer - UTarceva). (Metformin is a diabetic drug and Tarceva is used in the treatment of lung cancer). The evaluation (MRMccs.. measures if the neighborhood of the diagnosis codes is medically coherent using a predefined medical. ontology (CCS) as ground truth. The number computed may be thought of as a measure of precision. where a higher number is better. We refer the reader to the appendix for additional details..\nTable4 details the results on evaluating the medical embeddings. Once again, the baselines w. compare (Choi et al.]2016) are variants of Word2Vec that maximize the likelihood of the diagnosis codes conditioned on carefully crafted contexts. 
Our method performs comparably to the baselines even though it relies exclusively on global context and was not designed with this task in mind. Thi. setting depicts an instance where Jacobian vectors resulting from a deeper, better-trained mode outperform those from a shallow model, highlighting the importance of a method of interpretatior agnostic to the structure of the conditional probability functions in the generative model."}, {"section_index": "6", "section_name": "6 DISCUSSION", "section_text": "We explored techniques to improve inference and learning in deep generative models of sparse. non-negative data. We also developed and explored a novel, simple, yet effective method to interpre. the structure of the non-linear generative model via embeddings obtained from the Jacobian matrix. relating latent variables to observations. The embeddings are evaluated qualitatively and quantitatively. and were seen to exhibit interesting semantic structure across a variety of domains. Studying the effects of varying the priors on the latent variables, conditioning on context, and varying the neural. architectures that parameterize the conditional distributions suggest avenues for blending ideas from. generative modeling and Bayesian inference into building more powerful embeddings for data..\nTable 4: Medical Relatedness Measure: Evaluating the quality of embedding using medical (NDF-RT and. CCS) ontologies. Each column corresponds to a measure of how well the embedding space is amenable to. performing analogical reasoning (NDF-RT) or clusters meaningfully (CCS). A higher number is better. SCUIs. corresponds to the application of the method developed by (Choi et al.||2016) on data released by (Finlayson et al.2014). The learning algorithm for our embeddings does not use local context..\ndisease. Finally, for a qualitative evaluation of Jacobian vectors obtained from a model trained on movie ratings, we refer the reader to the appendix\nBetween the three choices of Jacobian vectors, we found that all three perform comparably on the. p1n we found that optimizing (x) improved the quality of the obtained Jacobian vectors on text and medical data. The full versions of Tables3|and4|can be found in the appendix.."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. Modern information retrieval, volume 463. ACM press New York, 1999\nYoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilisti language model. JMLR, 2003.\nBishop. Pattern Recognition and Machine Learning. Springer New York., 2006\nDavid M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. JMLR, 2003\nLeo Breiman. Statistical modeling: The two cultures. Statistical Science, 2001.\nYoungduck Choi, Chill Yi-I Chiu, and David Sontag. Learning low-dimensional representations o medical concepts. In AMIA, 2016.\nRonan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, 2008..\nDumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-laye. features of a deep network. 2009.\nLev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, an Eytan Ruppin. Placing search in context: The concept revisited. In Www, 2001.\nSamuel G Finlayson, Paea LePendu, and Nigam H Shah. Building the graph of medicine from millions of clinical narratives. 
Scientific Data, 2014.

Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 1995.

Matthew D Hoffman, David M Blei, Chong Wang, and John William Paisley. Stochastic variational inference. JMLR, 2013.

Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. Improving Word Representations via Global Context and Multiple Word Prototypes. In ACL, 2012.

Tommi S Jaakkola and David Haussler. Exploiting generative models in discriminative classifiers. In NIPS, 2007.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.

Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.

Thomas K Landauer, Peter W Foltz, and Darrell Laham. An introduction to latent semantic analysis. Discourse Processes, 1998.

Hugo Larochelle, Yoshua Bengio, Jerome Louradour, and Pascal Lamblin. Exploring strategies for training deep neural networks. JMLR, 2009.

Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. Generative topic embedding: a continuous representation of documents. In ACL, 2016.

Yishu Miao, Lei Yu, and Phil Blunsom. Neural variational inference for text processing. In ICML, 2016.

Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.

Maja R Rudolph, Francisco JR Ruiz, Stephan Mandt, and David M Blei. Exponential family embeddings. In NIPS, 2016.

Ruslan Salakhutdinov and Hugo Larochelle. Efficient learning of deep boltzmann machines. In AISTATS, 2010.

Ulrike Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 2007.

Shengjie Wang, Matthai Philipose, Matthew Richardson, Krzysztof Geras, Gregor Urban, and Ozlem Aslan. Analysis of deep neural networks with the extended data jacobian matrix. In ICML, 2016.

The Netflix dataset (Netflix, 2009) comprises movie ratings of 500,000 users. We treat each user's ratings as a document and model the numbers ascribed to each movie (from 1-5) as counts drawn from the multinomial distribution parameterized as in Eq. 1. We train a three-layer DLGM on the dataset, evaluate J_mean with 100 samples, and consider two distinct methods of evaluating the learned embeddings. We cluster the movie embeddings (using spectral clustering with cosine distance to obtain 100 clusters) and depict some of the clusters in Table 6a. We find that clusters exhibit coherent themes such as documentary films, horror, and James Bond movies. Other clusters (not displayed) included multiple seasons of the same show, such as Friends, WWE wrestling, and Pokemon. In Table 6b, we visualize the neighbors of some popular films. In the examples we visualize, the nearest neighbors include sequels, movies from the same franchise or, as in the case of 12 Angry Men, other dramatic classics.

To compare the effect of using a model to create embeddings versus using the raw data from a large
dataset directly, we evaluated nearest neighbors of movies using a simple baseline. For a query movie, we found all users who gave the movie a rating of 3 or above (nominally, they watched and liked the movie). Then, for all those users, we computed the mean ratings they gave to every other movie in the vocabulary and ranked them based on the mean ratings. We display the top five movies obtained using this approach in Table 6. The query words are the same as in Table 6b. For most of the queries, the difference between the two is evident: we simply end up with popular, well-liked movies rather than relevant movies.

We study the effect of varying the inference network on models trained with the RCV2 data. Holding fixed a three-layer DLGM with stochastic dimension 100 (the same architecture as 3-tfidf in Table 1), we learn models on the RCV2 data using M = 1, 100 (we evaluate and report bounds on perplexity using M = 100) and display the results in Table 5. When M = 1 (the standard procedure for training VAEs and DLGMs), increasing the number of layers in the inference network decreases the quality of the model learned. One possible explanation for this is that the already noisy gradients of the inference network must propagate along a longer path in a deeper inference network, slowing down learning of the parameters, which in turn affects the quality of inference. In contrast, increasing the hidden dimension of the inference network improves results. Generally, we obtain better results (in both train and validation error) with M = 100 than when training with M = 1 across the various configurations of the inference network that we tried. Furthermore, we find that when M = 100, the inference network architecture is less relevant and all models converge to approximately the same result, suggesting that the procedure treats the output of the inference network as a crude initialization for the variational parameters and that the subsequent steps of optimization are primarily responsible for gains in learning.

Table 5: Train and Test Perplexity on RCV2: The tables herein show the train and held-out bounds on perplexity obtained while varying the structure of the inference network. The top table depicts results for M = 1 and the bottom table for M = 100. The values along the rows and columns depict different parameters for the inference network.

M = 1:
Dimension | 1 layer (Train / Validate) | 2 layer (Train / Validate) | 3 layer (Train / Validate)
100 | 337.01 / 351.26 | 357.28 / 371.37 | 390.00 / 407.15
400 | 322.94 / 340.45 | 331.36 / 347.95 | 338.09 / 355.64

M = 100:
Dimension | 1 layer (Train / Validate) | 2 layer (Train / Validate) | 3 layer (Train / Validate)
100 | 317.84 / 333.64 | 318.03 / 334.54 | 318.04 / 335.92
400 | 314.75 / 332.35 | 314.04 / 332.40 | 313.95 / 332.04

Table 6: Qualitative Evaluation of Movie Embeddings: We evaluate J_mean using 100 Monte-Carlo samples to perform the evaluations in Tables 6a and 6b.

Table 7 is the full version of Table 3 in the main paper. We find that the three variants of the Jacobian vectors perform comparably across the board. The vectors obtained from shallow log-linear models appear to have the edge. The evaluation on the WordSim and SCWS datasets is done by computing the Spearman rank correlation between human-annotated rankings (between one and ten) and an algorithmically derived measure of word-pair similarity. We first compute the distances between all word pairs. Our measure of similarity is obtained by subtracting the distances from the maximal distance across all word pairs.
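The scoring procedure just described amounts to the following sketch: convert pairwise distances into similarities by subtracting them from the maximum distance, then rank-correlate with human judgments. The embeddings and gold scores below are made-up placeholders, and the use of cosine distance is an assumption (the excerpt does not pin down the metric); scipy provides the rank correlation.

    import numpy as np
    from scipy.stats import spearmanr
    from scipy.spatial.distance import cosine

    emb = {"king": np.array([1.0, 0.2]), "queen": np.array([0.9, 0.3]),
           "apple": np.array([-0.5, 1.0]), "fruit": np.array([-0.4, 0.8])}
    pairs = [("king", "queen"), ("apple", "fruit"), ("king", "apple")]
    gold = [8.6, 7.5, 2.1]                      # human similarity judgments (hypothetical)

    dists = np.array([cosine(emb[a], emb[b]) for a, b in pairs])
    similarity = dists.max() - dists            # subtract from the maximal distance
    rho, _ = spearmanr(similarity, gold)
    print(rho)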
Table 7: Semantic Similarity in Words: The baseline results are taken from (Huang et al., 2012). C&W uses embeddings from the language model of (Collobert & Weston, 2008). GloVe corresponds to embeddings by (Pennington et al., 2014). Three words ('y2k', 'insufflate', 'sincere') from the evaluation datasets were not found in our vocabulary after pre-processing and were not used in the evaluation.

(a) WordSim353: "G" denotes the model in Huang et al. learned only with global context in the document.

Models | Spearman ρ × 100
Huang (G) | 22.8
Huang | 71.3
GloVe | 75.9
C&W | 55.3
ESA | 75
Tiered Pruned tf-idf | 76.9
1-M1 J^log_mean | 65.8
1-M1 J^prob_mean | 69.7
1-M1 J^pot_mean | 66.3
1-M100 J^log_mean | 66.3
1-M100 J^prob_mean | 70.9
1-M100 J^pot_mean | 69.5
3-M1 J^log_mean | 45.0
3-M1 J^prob_mean | 46.9
3-M1 J^pot_mean | 43.9
3-M100 J^log_mean | 59.6
3-M100 J^prob_mean | 59.6
3-M100 J^pot_mean | 57.8

(b) SCWS: (S) denotes a single-prototype approach versus (M), which denotes a multi-prototype approach (that leverages context).

APPENDIX D EHR DATA: EMBEDDINGS FOR DIAGNOSIS CODES

For EHR data in particular, the bag-of-diagnosis-codes assumption we make is a crude one, since (1) we assume the temporal nature of the patient data is irrelevant, and (2) combining patient statistics over time renders it difficult for the generative model to disambiguate the correlations between codes that correspond to multiple diseases a patient may suffer from. Despite this, it is interesting that the Jacobian vectors still capture much of the meaningful structure among the diagnosis codes (cf. Tables 2c and 8b). Here we provide additional details surrounding the evaluation of medical embeddings.

MRM_CCS(V, G): The Agency for Healthcare Research and Quality's clinical classification software (CCS) collapses the hierarchical ICD9 diagnosis codes into clinically meaningful categories. The evaluation on CCS checks whether the nearest neighbors of a disease include other diseases related to it (i.e., whether they are in the same category in the CCS). Using the ICD9 hierarchy, the authors further split the evaluation task into predicting neighbors of fine-grained and coarse-grained diagnosis codes.

For a choice of granularity G ∈ {fine, coarse}, V(G) ⊆ V denotes the subset of ICD9 codes in the vocabulary. 1_G(v(i)) is one if v's ith nearest neighbor v(i) is in the same group as v according to G. Then:

    MRM_CCS(V, G) = (1/|V(G)|) Σ_{v ∈ V(G)} Σ_{i=1}^{40} 1_G(v(i)) / log_2(i + 1).     (6)
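A sketch of the MRM_CCS computation in Eq. 6, assuming a ranked-nearest-neighbor function and a code-to-CCS-group mapping are available; the neighbors function, the group dictionary, and the toy three-code vocabulary are all hypothetical inputs.

    import numpy as np

    def mrm_ccs(codes, neighbors, group, k=40):
        # (1/|V(G)|) * sum_v sum_{i=1..k} 1[group(v(i)) == group(v)] / log2(i + 1)
        total = 0.0
        for v in codes:
            for i, u in enumerate(neighbors(v)[:k], start=1):
                if group[u] == group[v]:
                    total += 1.0 / np.log2(i + 1)
        return total / len(codes)

    # Toy example: three codes, two CCS groups, neighbors ranked by hand.
    group = {"a": 0, "b": 0, "c": 1}
    ranked = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
    print(mrm_ccs(["a", "b", "c"], lambda v: ranked[v], group))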
The evaluation we perform reports a number proportional to the number of times\n(b) SCwS: (S) denotes a single prototype approach versus (M) that denotes a multi- prototype approach (that leverages context)\nthe neighborhood of v - s contains r for the best value of s (computed from the set of all valid drug-disease relationships in the datasets.)\nGiven V* E V (concepts for which NDF-RT has at-least one substance with the given relation) IR (U401 (v - s)(i)) is one if any of the medical concepts in the top-40 neighborhood of the selected. medical concept v satisfies relation R.\nTable|8a|depicts two examples of using the learned embeddings in the Jacobian matrix to answer tasks queries related to drug-disease pairs. Table|8b|depicts clusters found in medical diagnosis codes. Table 9|is the extended version of Table4|in the main paper (where, for the comparison on NDF-RT,. we depict the results obtained from the best choice of seed s)..\n(b) Medical Embeddings (Clustering): We visualize some topical clusters of diagnosis codes\nTable 9: Medical Relatedness Measure: Evaluated using the NDF-RT and CCS ontologies. For the evaluation. on NDF-RT, there is a choice of s (Eq[7) to be made. The results are reported both by averaging across all pairs of drugs-diseases used to form s (avg-seed) and the best drug-disease pair (max-seed). SCUIs corresponds to the application of the method developed by (Choi et al.2016) on data released by (Finlayson et al.2014)\nModels MRMNDF-RT MRMNDF-RT MRMccs MRMccs (May Treat) (May Prevent) (Fine Grained) (Coarse Grained) (De Vine et al.l[2014) 31.34/53.21 34.47/57.14 22.63 24.56 Choi et al. 2016) 36.62/59.40 28.02/55.71 44.80 47.43 SCUI 34.89/52.75 30.95/48.57 34.16 37.31 1-M1 Jmean 4.08/54.36 19.90/34.29 30.82 34.04 1-M1 Jmean 33.87/56.19 22.41/34.29 31.76 35.07 1-M1 Jmea 27.45/59.63 15.02/32.86 31.58 33.88 1-M100 Jmean 33.32/54.82 19.02/35.71 33.04 35.78 30.70/53.90 21.66 / 34.29 32.86 35.66 28.49/55.73 15.56/31.43 32.87 35.09 3-M1 Jmean 33.21/52.29 23.84/42.86 32.80 37.58 12.12/30.28 7.62/ 17.14 23.42 26.85 33.30/51.61 23.47/41.43 33.02 37.84 37.00/61.47 23.70/42.86 37.54 40.52 9.79/21.33 4.39 / 11.43 7.82 8.60 36.11/60.32 22.26/38.57 37.77 40.87\n1 In(Us21(v-s)(i)) MRMNDF-RT(V, R) V* vEV*\nHlammation arteriovenous fstula (a) Medical Analogies: We can perform analogical reasoning with embeddings of medical codes. If we know a drug used to treat a disease, we can use their relationship in vector space to find unknown drugs associated with.. a different disease. The queries are of the form Code 1->Code 2 = > Code 3->?. Sicca syndrome or Sjogren's disease is an immune disease treated with Evoxac and Methotrexate is commonly used to treat Rheumatoid Arthiritis. \" Leg Varicosity\"' denotes the presence of swollen veins under the skin. \"Ligation of angioaccess. arteriovenous fistula' denotes the tying of passage between an artery and a vein..\nabel Diagnosis Codes nrombosis Hx Venous Thrombosis, Compression Of Vein, Renal Vein Thrombosis ccular Atrophy Optic Atrophy, Retina Layer Separation, Chronic Endophthalmitis rug Use Opioid Dependence, Alcohol Abuse-Continuous, Hallucinogen Dep"}] |
CHARGED POINT NORMALIZATION

Armen Aghajanyan
Bellevue, WA 98007, USA

ABSTRACT

Recently, the problem of local minima in very high dimensional non-convex optimization has been challenged, and the problem of saddle points has been introduced. This paper introduces a dynamic type of normalization that forces the system to escape saddle points. Unlike other saddle-point-escaping algorithms, second order information is not utilized, and the system can be trained with an arbitrary gradient descent learner. The system drastically improves learning in a range of deep neural networks on various data-sets in comparison to non-CPN neural networks.

1.1 INTRODUCTION

Recently, more and more attention has focused on the problem of saddle points in very high dimensional non-convex optimization. Saddle points represent points in the optimization problem where the first-order gradients are all zero, but the stationary point is neither a maximum nor a minimum. A saddle point of a function can be confirmed by using the eigenvalues of the Hessian matrix:

- If the eigenvalues are all negative, then the critical point is a local maximum.
- If the eigenvalues are all positive, then the critical point is a local minimum.
- If the eigenvalues contain at least one positive and at least one negative eigenvalue, then the point is a saddle point.

One way to analyze the prevalence of saddle points is to assign a joint probability density to the eigenvalues of the Hessian matrix at a critical point. If p(λ_1, λ_2, ..., λ_n) is the joint probability density function, then the probability that the Hessian matrix resembles a saddle point, given that the Hessian is not singular, is:

    1 − ∫_0^∞ ⋯ ∫_0^∞ p(λ_1, ..., λ_n) dλ_1 ⋯ dλ_n − ∫_{−∞}^0 ⋯ ∫_{−∞}^0 p(λ_1, ..., λ_n) dλ_1 ⋯ dλ_n.

Another way to interpret the expression above is to realize that each of the two n-fold integrals represents the joint density mass of one of two hypercubes: one in the direction of all the positive axes, and the other in the direction of all the negative axes, respectively representing minima and maxima.

Theorem 1. The fraction of the space of eigenvalues of a non-singular Hessian matrix that represents minima and maxima, in comparison to the total space, decreases asymptotically as 2^{1−n}.

The number of unique hypercubes starting from the origin and spanning along the axes is 2^n. The number of hypercubes representing minima and maxima is two. Therefore, the fraction of the space that contains the eigenvalues that would indicate either a minimum or a maximum is 2^{1−n}, where n represents the dimensionality of the Hessian matrix.

What this shows is that as we increase the dimensionality of our optimization problem, the fraction of the total space that represents either a local minimum or a local maximum decreases by a factor of two with each added dimension.

Although this interpretation gives some intuition behind the saddle point problem, we cannot conclusively say that the probability of a critical point being a saddle point increases exponentially, because we do not know the behavior of the joint probability function.
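The eigenvalue test listed above is mechanical to implement. This NumPy sketch classifies a critical point of a toy function from its Hessian; the quadratic and its Hessian are illustrative placeholders.

    import numpy as np

    def classify_critical_point(H, tol=1e-8):
        lam = np.linalg.eigvalsh(H)            # Hessian is symmetric
        if np.any(np.abs(lam) < tol):
            return "degenerate (singular Hessian)"
        if np.all(lam > 0):
            return "local minimum"
        if np.all(lam < 0):
            return "local maximum"
        return "saddle point"

    # f(x, y) = x^2 - y^2 has a saddle at the origin:
    H = np.array([[2.0, 0.0], [0.0, -2.0]])
    print(classify_critical_point(H))          # -> saddle point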
To understand the shortcomings of first order gradient descent algorithms around saddle points, we analyze the neighborhood of a saddle point. Given a function f, the Taylor expansion around the saddle point x is given by:

f(x + δ) = f(x) + (1/2) δ^T H δ    (1)

The first order term disappears because we are at a critical point. Denoting e_1, e_2, ..., e_n as the eigenvectors of the non-degenerate Hessian H, and λ_1, λ_2, ..., λ_n as the respective eigenvalues, we can use a change of coordinates to rewrite the Taylor expansion in terms of the eigenvectors:

δ = Σ_{i=1}^{n} δ_i e_i,    f(x + δ) = f(x) + (1/2) Σ_{i=1}^{n} λ_i (e_i^T δ)^2    (2)

From the last equation we can analyze the behavior of first order gradient descent algorithms, specifically with respect to the signs of the eigenvalues. If eigenvalue λ_i is positive, the optimization point will move toward the critical point x; if eigenvalue λ_i is negative, the optimization point will move away from the critical point.

This shows that the direction of the gradient descent algorithm is not the problem around saddle points, but rather the step size of the algorithm. This problem is sometimes amplified by the plateaus surrounding the critical point, as shown in (Saad & Solla, 1996). Another complication visible from equation 2 is that if the step size is greater than λ_max^{-1}, the gradient descent algorithm will begin to diverge. Therefore one large eigenvalue of the surface of the error function will force the gradient descent algorithm to take very small steps in all the other directions. A very similar derivation and explanation appears in (Dauphin et al., 2014).

"}, {"section_index": "3", "section_name": "2.1 METAPHOR", "section_text": "The metaphor for our method goes as follows. The current point in our optimization is a small positively charged point that is moving on the neutral surface of error. Our normalization works by dynamically placing other positively charged points around the surface of error to 'push' our optimization point away from undesirable positions. Optimally we would run the gradient descent algorithm until convergence, check whether the converged point is a saddle point, place a positively charged point near the saddle point, and continue the optimization. This metaphor is what gave inspiration to the derivation of our normalization.

The general optimization problem is defined as:

L(f; X, Y) = Σ_{i=1}^{n} V(f(X_i), Y_i) + λ R(f)    (3)

This formulation is static: given the same function f and the same X and Y, the loss will always be equal. Our formulation introduces a dynamic normalization function R_t; the loss function therefore becomes:

L_t(f; X, Y) = Σ_{i=1}^{n} V(f(X_i), Y_i) + λ R_t(f)    (4)

The function f contains dynamic parameters W_t^1, W_t^2, ..., W_t^N, while the function R_t contains parameters β, p, φ and Ŵ_t^1, Ŵ_t^2, ..., Ŵ_t^N, symbolizing the decay factor, norm, merge function and merge values respectively. The subscript t in W_t^n represents the value of W^n at time t of the optimization algorithm. Charged Point Normalization is now defined as:

R_t(f) = e^{-βt} Σ_n ||W_t^n - Ŵ_t^n||_p^{-1}    (5)

The update for the merge values is defined as:

Ŵ_{t+1}^n = φ(W_t^n, Ŵ_t^n),    Ŵ_1^n = W_1^n + ε    (6)

where ε is a source of random error to ensure we do not have a division by zero. In our experiments, ε was a matrix of the same size as W^n with random entries sampled from a normal distribution with zero mean and a very small standard deviation.

What this type of normalization attempts to do is reward the optimization algorithm for taking steps that maximize the distance between the new point and the trailing point. This can be seen as a more dynamic and adaptive version of momentum that kicks in when the optimization problem settles down into a saddle point or long plateau. That being said, CPN can still be used with traditional momentum methods, as shown by the experiments below.

"}, {"section_index": "4", "section_name": "2.3 CHOICE OF HYPERPARAMETER", "section_text": "The function φ can be any function that merges the two parameters into one parameter of the same dimension. Throughout this whole paper we used the exponential moving average for our function φ:

φ(W_t, Ŵ_t) = α W_t + (1 - α) Ŵ_t,    α ∈ (0, 1)    (7)

Although, to keep up with the metaphor, Coulomb's inverse-square law did not work as well as projected; through trial and error, the p value that worked the best was 1. The 1-norm is simply the sum of absolute values.
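Before turning to the experiments, a minimal NumPy sketch of one CPN-augmented gradient step under the definitions above. The gradient of the data loss is assumed to be supplied by the training framework, the closed-form subgradient of the 1-norm term is an illustrative derivation, and all names are placeholders rather than the paper's released code.

```python
import numpy as np

def cpn_step(params, trailing, grads, t, lr=0.01, lam=0.1,
             beta=0.001, alpha=0.95):
    """One SGD step on L_t = loss + lam * R_t with p = 1, where
    R_t(f) = exp(-beta * t) * sum_n 1 / ||W_n - What_n||_1."""
    decay = np.exp(-beta * t)
    new_params, new_trailing = [], []
    for W, What, g in zip(params, trailing, grads):
        diff = W - What
        norm = np.abs(diff).sum() + 1e-12               # guard against division by zero
        g_cpn = -lam * decay * np.sign(diff) / norm**2  # d/dW of lam * decay / ||diff||_1
        new_params.append(W - lr * (g + g_cpn))
        # exponential-moving-average merge: What <- alpha * W + (1 - alpha) * What
        new_trailing.append(alpha * W + (1 - alpha) * What)
    return new_params, new_trailing
```

Because g_cpn points away from the trailing copy and shrinks as exp(-beta * t), the repulsion is strong early in training and fades out, which is exactly the exploration/exploitation trade-off discussed in Section 4.1.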
Charged Point Normalization was implemented in Theano (Bastien et al., 2012) and integrated with the Keras (Chollet, 2015) library; we utilized the convolutional and recurrent networks in the Keras library as well. All training and testing was run on an Nvidia GTX 980 GPU. We do not show results on a validation set, because we care about the efficiency and performance of the optimization algorithm, not whether or not it overfits: the overfitting of a model is not the fault of the optimization routine but rather of the field it is optimizing over. All comparisons between the standard and charged models started with identical sets of weights. Throughout all of our experiments we utilize a softmax layer as the final layer, and consequently all the losses measured throughout this paper are in terms of categorical cross-entropy. We used the train split of each data-set.

The first test conducted was using a multilayer perceptron on the MNIST dataset. The architecture of the neural net contained layers with sizes 784 -> 512 -> 512 -> 10. All intermediate layers contained rectified linear activations (He et al., 2015), while the final layer is a softmax layer. Between layers, dropout (Hinton et al., 2012) with a probability of 0.2 was added. We compare the standard batch gradient descent algorithm with a step size of 0.001 and batch size of 128, on the net described above and on the same net with Charged Point Normalization (CPN). The CPN hyper-parameters were β = 0.001, λ = 0.1, with the moving average parameter α = 0.95. The loss we were optimizing over was categorical cross-entropy.

"}, {"section_index": "5", "section_name": "3.2.2 MNIST: DEEP AUTOENCODER", "section_text": "The second test conducted on simple neural networks was in the form of an autoencoder. The architecture of the autoencoder contained layers with sizes 784 -> 512 -> 512 -> 10 -> 512 -> 512 -> 784. All layers contained rectified linear activations. Between layers, dropout with a probability of 0.2 was added. The setup of the experiment is almost identical to the previous experiment; the only difference is that in this case we optimized for binary cross-entropy.

(Figure: training loss on the MNIST dataset for the MLP (left) and the deep autoencoder (right); x-axis, epoch; y-axis, loss; CPN vs. standard.)

It is interesting to note that when the optimization problem is relatively simple, more specifically if the optimization algorithm takes smooth steps, CPN allows the optimization algorithm to take bigger steps in the correct direction. CPN does not display any periodic or chaotic behavior in this scenario. This is not the case for the more complicated optimization problems presented below.

The next experiment conducted was using a convolutional neural network on CIFAR-10 (Krizhevsky et al., a). The architecture was as such: Convolution2D(32, 3, 3) -> ReLU -> Convolution2D(32, 3, 3) -> ReLU -> MaxPooling(2, 2) -> Dropout(0.25) -> Convolution2D(64, 3, 3) -> ReLU -> Convolution2D(64, 3, 3) -> ReLU -> MaxPooling(2, 2) -> Dropout(0.25) -> Dense(512) -> ReLU -> Dropout(0.5) -> Dense(10) -> Softmax.

Convolution2D takes the parameters number of filters, width and height respectively; Dense takes one parameter describing the size of the layer; MaxPooling takes two parameters that signify the pool size. ReLU is the rectified linear function, while Softmax is the softmax activation function. The optimization algorithm used was stochastic gradient descent with a learning rate of 0.01, decay of 1e-6, momentum of 0.9, with Nesterov acceleration. The batch size used was 32. The hyper-parameters for CPN were β = 0.01, λ = 0.1, with the moving average parameter α = 0.95. 10,000 random images were used from the CIFAR-10 data-set instead of the full dataset to speed up learning. (A Keras-style sketch of this architecture is given after the CIFAR results below.)

(Figure: loss (left) and accuracy (right) on the CIFAR-10 dataset; x-axis, epoch; CPN vs. standard.)

It is interesting to note that CPN performs worse until the optimization algorithm reaches the 'elbow' of the curve, where CPN then continues along its path while the standard algorithm begins to converge. CPN also takes steps that are much less 'optimal' in the greedy sense, which is why both the loss and accuracy curves behave partially chaotically.

The CIFAR-100 (Krizhevsky et al., b) setup was nearly identical to the CIFAR-10 setup. The same architecture of the neural network was used; the only difference was in the λ parameter of the normalization term, which in this case was equal to 0.01. 20,000 random images were used.

(Figure: loss (left) and accuracy (right) on the CIFAR-100 dataset; x-axis, epoch; CPN vs. standard.)

The same behavior as in the CIFAR-10 experiment was exhibited: the elbow of the loss curve was the point where CPN began to outperform standard optimization.
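For concreteness, the promised Keras-style sketch of the CIFAR architecture described above. The paper used the older Convolution2D API; this is an illustrative reconstruction with modern layer names (and an explicit Flatten, which the listing leaves implicit), not the authors' code.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def build_cifar_model(num_classes=10):
    # Mirrors the listing: two conv blocks with pooling/dropout, then a dense head.
    return Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(32, 32, 3)),
        Conv2D(32, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Dropout(0.25),
        Conv2D(64, (3, 3), activation="relu"),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Dropout(0.25),
        Flatten(),                     # implied between the conv stack and Dense(512)
        Dense(512, activation="relu"),
        Dropout(0.5),
        Dense(num_classes, activation="softmax"),
    ])
```

The same model with num_classes=100 covers the CIFAR-100 setup, since only the CPN λ differed between the two experiments.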
We selected the path-finding problem of the BABI dataset, due to it being the most difficult task. The architecture consisted of two recurrent networks and one standard neural network. Each of the recurrent neural networks had the structure Embedding -> RNN. The embedding, sentence and query hidden layer sizes were set to 3. The final network concatenated the two recurrent network outputs and fed the result into a dense layer with an output size of vocab_size. Refer to Figure 1 for a diagram.

Figure 1: Architecture for BABI Test. (figure: two Embedding -> RNN branches, concatenated, followed by a dense layer.)

We ran our experiment with two different recurrent neural network structures: Gated Recurrent Units (GRU) (Chung et al., 2014) and Long Short Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997). The ADAM (Kingma & Ba, 2014) optimization algorithm was used for both recurrent structures with the parameters α = 0.001, β_1 = 0.9, β_2 = 0.999, ε = 1e-8. For the LSTM architecture, the CPN hyper-parameters were β = 0.0025, λ = 0.03, α = 0.95. For the GRU architecture, the CPN hyper-parameters were β = 0.1, λ = 0.1, α = 0.95.

(Figures: loss and accuracy on the BABI path-finding task for the GRU and LSTM architectures; x-axis, epoch; CPN vs. standard.)

From the accuracy graphs we can see that CPN causes the recurrent network to escape the saddle point earlier than a recurrent network with no CPN.

"}, {"section_index": "6", "section_name": "4.1 EXPLORATION VS EXPLOITATION", "section_text": "In a standard gradient descent with no normalization, the updates taken by the algorithm are always greedy, in the sense of always minimizing the loss of the model. There is no exploration: gradient descent is by nature a greedy algorithm, optimizing only locally. What CPN allows gradient descent to do is move in non-optimal directions early on in the optimization routine, while still allowing for precise fine-tuning at the end. This trade-off is controlled by the β parameter.

"}, {"section_index": "7", "section_name": "4.2 BEHAVIOR AROUND SADDLE POINTS", "section_text": "A vanilla neural network with one single hidden layer was trained on a downsampled 8x8 version of the MNIST dataset (Lecun et al., 1998). Full gradient descent was run on 10,000 random images until convergence. We compare the differences between the eigenvalue distributions of the neural network with CPN and the neural network without it. Recall that the tighter the range of the eigenvalues, the larger the steps the gradient descent algorithm can take without worrying about divergence, as explained in section 1.2.

(Figure: kernel density estimates of the eigenvalue distributions of the input-to-hidden (I->H) and hidden-to-output (H->O) Hessians, for the CPN (left) and vanilla (right) networks.)

The graph above shows a kernel density estimation done on the input-to-hidden and hidden-to-output Hessians at the near-critical point. There are both negative and positive eigenvalues, especially in the hidden-to-output weights; therefore it is safe enough to say that we are at a saddle point (Turlach, 1993). The first graph represents the CPN neural network, while the next graph represents a non-normalized neural network. The CPN network shows a tighter distribution, as well as more of the eigenvalues being focused near 0.
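For small networks, the spectra visualized above can be obtained numerically; a minimal NumPy sketch is given below. The paper does not specify its exact procedure for the 8x8 MNIST network, so this is an illustrative finite-difference construction on a toy scalar loss.

```python
import numpy as np

def numerical_hessian(loss, w, eps=1e-4):
    """Hessian of a scalar loss at parameter vector w via central differences."""
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            w_pp = w.copy(); w_pp[i] += eps; w_pp[j] += eps
            w_pm = w.copy(); w_pm[i] += eps; w_pm[j] -= eps
            w_mp = w.copy(); w_mp[i] -= eps; w_mp[j] += eps
            w_mm = w.copy(); w_mm[i] -= eps; w_mm[j] -= eps
            H[i, j] = (loss(w_pp) - loss(w_pm) - loss(w_mp) + loss(w_mm)) / (4 * eps**2)
    return (H + H.T) / 2  # symmetrize away numerical noise

# Example: spectrum at the critical point of a tiny surface with mixed curvature.
loss = lambda w: w[0]**2 - w[1]**2 + 0.1 * w[2]**2
eigvals = np.linalg.eigvalsh(numerical_hessian(loss, np.zeros(3)))
print(eigvals)  # approximately [-2, 0.2, 2]: mixed signs, hence a saddle point
```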
To ensure that the normalization is actually repelling the optimization point from saddle points, and that the results achieved in the experimental section are not due to some confounding factors, we utilize a low-dimensional experiment to show the repelling effect of CPN. We utilize the monkey saddle as the optimization surface; the monkey saddle has a saddle point surrounded by plateaus in all directions and is defined as x^3 - 3xy^2. Referring to section 1.2, we discussed that the direction of gradient descent algorithms is not their shortcoming around saddle points; rather, the problem is the step size. What CPN should in theory do is allow the optimization routine to take larger steps.

Below are two figures. The first shows the behavior of five common gradient descent algorithms starting at a point near the saddle point (point: (x = 0.0001, y = -0.0001)) (Zeiler, 2012; Duchi et al., 2010). The next figure shows the same algorithms starting at the same point but utilizing CPN. All visualizations were done using the matplotlib library (Hunter, 2007).

The hyper-parameters used were all the default hyper-parameters in the Keras library, apart from Adam's (to make it visible on the graphs). All hyper-parameters are available in Table 1. SGD Accelerated refers to the standard SGD algorithm using momentum and Nesterov acceleration (Nesterov, 1983). The CPN parameters were chosen using a very small discrete grid-search; in reality just about any reasonable parameters can be chosen in order for CPN to work in this experiment, and a grid-search was not necessary to find a solution. This is why we reuse two sets of hyper-parameters for this toy problem.

Table 1: Hyper-parameters for the toy problem (α, λ, β are the CPN parameters).

Algorithm        LR     Momentum   ρ      β_1   β_2     α     λ     β
SGD              0.01   0          NA     NA    NA      0.1   1.0   0.1
SGD Accelerated  0.01   0.9        NA     NA    NA      0.1   1.0   0.1
AdaGrad          0.01   NA         NA     NA    NA      0.5   1.0   0.001
AdaDelta         1.00   NA         0.95   NA    NA      0.5   1.0   0.001
Adam             0.01   NA         NA     0.9   0.999   0.5   1.0   0.001

(Figure: optimization paths of SGD, SGD+Momentum/Nesterov, AdaGrad, AdaDelta and Adam on the monkey saddle, without CPN.)

Figure 2: Non-CPN Optimization Paths

Each algorithm performed better when coupled with CPN than without; the loss was computed using the monkey saddle equation above. All the losses for both CPN and non-CPN are available in Table 2. CPN allowed the optimization algorithms to escape the saddle point quickly, even though the gradient near the starting point of the optimization was near zero.

Table 2: Final Loss After 120 Iterations for the Toy Problem.

                  Non-CPN        CPN
SGD               -2.00428E-12   -8.192E9
SGD Accelerated   -2.04018E-12   -8.192E9
AdaGrad           -1.75024E-11   0.00463
AdaDelta          -2.46194E-12   -2.22216
Adam              -12.8413       -12.9671

(Figure: the corresponding optimization paths of the same five algorithms when CPN is used.)

Figure 3: CPN Optimization Paths

- Without CPN, only the Adam algorithm escaped the plateau in fewer than 1000 iterations.
- With CPN, every algorithm apart from AdaGrad successfully escaped the plateau in fewer than 120 iterations, the most notable being SGD Accelerated, which escaped in just 8 iterations.

From this toy example we can conclude that CPN does in fact repel the optimization algorithm away from saddle points, and therefore the results from the experiments are due to this phenomenon and most likely no other confounding factors.
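A self-contained sketch of this toy experiment: plain gradient descent on the monkey saddle f(x, y) = x^3 - 3xy^2, with and without a CPN term. The hyper-parameters and the early-stopping threshold are illustrative, and the ε scale is chosen so the first repulsive step stays finite in this sketch.

```python
import numpy as np

np.random.seed(0)

def grad_f(p):                       # gradient of f(x, y) = x**3 - 3*x*y**2
    x, y = p
    return np.array([3 * x**2 - 3 * y**2, -6 * x * y])

def run(steps=120, lr=0.01, cpn=False, lam=1.0, beta=0.1, alpha=0.1):
    p = np.array([1e-4, -1e-4])      # start right next to the saddle at the origin
    trail = p + np.random.normal(scale=1e-2, size=2)   # merge values: W_1 + eps
    for t in range(1, steps + 1):
        g = grad_f(p)
        if cpn:
            diff = p - trail
            norm = np.abs(diff).sum() + 1e-12
            g = g - lam * np.exp(-beta * t) * np.sign(diff) / norm**2
            trail = alpha * p + (1 - alpha) * trail    # EMA merge update
        p = p - lr * g
        loss = p[0]**3 - 3 * p[0] * p[1]**2
        if loss < -1e9:              # clearly escaped; stop before numbers overflow
            break
    return loss

print("plain SGD:", run(cpn=False))  # ~ -2e-12: stuck on the plateau
print("CPN  SGD :", run(cpn=True))   # large negative: repelled off the saddle
```

The plain run reproduces the near-zero losses of Table 2's Non-CPN column, while the CPN run diverges toward negative infinity once kicked off the plateau, which is why the sketch stops early instead of completing all 120 steps.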
"}, {"section_index": "8", "section_name": "4.4 PERIODICITY AND TERMINAL BEHAVIOR", "section_text": "As shown in the experiments done on the CIFAR datasets, CPN has a tendency to force the optimization algorithm to act more chaotically. The exponential term in the normalization is there to ensure that the optimization algorithm does not get stuck in a periodic path. It is trivial to see that as the time of the optimization goes toward infinity, the impact of the normalization tends toward 0. Therefore, if the optimization algorithm does not reach a local minimum but is instead on an elliptical path, then, assuming that the λ term is not great enough to push the point out of the local minimum, the optimization algorithm will eventually reach the local minimum.

"}, {"section_index": "9", "section_name": "NOTES ON HYPER-PARAMETERS", "section_text": "Due to restrictions on our hardware resources, we did not have enough time to run a comprehensive study of the behavior of CPN with respect to its hyper-parameters. Throughout this paper the selection of hyper-parameters was kept rather simple: we selected the hyper-parameters in a feasible range and then adjusted them, either by hand around 4-8 times, or similarly by running a basic discrete grid-search over the same number of hyper-parameter settings. So the hyper-parameters for CPN chosen in this paper are in no way optimal for the various setups explained, and yet the results we found were substantial, which we find quite optimistic.

Some practical caveats of CPN are worth noting:

- CPN with an exponential moving average for the φ function introduces 2 extra hyper-parameters, not including the normalization scalar λ.
- In terms of implementation, CPN doubles the amount of memory needed for the optimization problem, as a trailing copy of the parameters must be kept.
- The fraction term in CPN will generally contain small floating-point values in both numerator and denominator, which can sometimes lead to numerical instability.
- If saddle points are reached at a really late time in the optimization, the exponential decay will nullify the effects of CPN. A possible solution would be to substitute the exponential decay term with some type of periodic decay.

"}, {"section_index": "10", "section_name": "7 CONCLUSION", "section_text": "In this paper we introduced a new type of dynamic normalization that allows gradient-based optimization algorithms to escape saddle points. We showed empirical results on standard data-sets demonstrating that CPN successfully escapes saddle points on various neural network architectures. We discussed the theoretical properties of first order gradient descent algorithms around saddle points, as well as the influence of the largest eigenvalue on the step taken. Empirical results confirmed the hunch that the Hessian of charged-point-normalized neural networks contains eigenvalues which are smaller in magnitude than those of their non-normalized counterparts.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.

François Chollet. Keras. https://github.com/fchollet/keras, 2015.

Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. CoRR, abs/1406.2572, 2014. URL http://arxiv.org/abs/1406.2572.

Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012. URL http://arxiv.org/abs/1207.0580.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980.

Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-100 (Canadian Institute for Advanced Research), b.
URL http://www.cs.toronto.edu/~kriz/cifar.html.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2012. URL http://arxiv.org/abs/1211.5063.

David Saad and Sara A. Solla. Dynamics of on-line gradient descent learning for multilayer neural networks. Advances in Neural Information Processing Systems, 8:302-308, 1996. ISSN 1049-5258.

Berwin A. Turlach. Bandwidth selection in kernel density estimation: A review. In CORE and Institut de Statistique, pp. 23-493, 1993. URL http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.44.6770."}]
BJC8LF9ex [{"section_index": "0", "section_name": "RECURRENT NEURAL NETWORKS FOR MULTIVARIATE TIME SERIES WITH MISSING VALUES", "section_text": "Zhengping Che, Sanjay Purushotham
Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA
kyunghyun.cho@nyu.edu, dsontag@cs.nyu.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Multivariate time series data are ubiquitous in many practical applications ranging from health care, geoscience, astronomy, to biology and others. They often inevitably carry missing observations due to various reasons, such as medical events, saving costs, anomalies, inconvenience and so on. It has been noted that these missing values usually constitute informative missingness (Rubin, 1976), i.e., the missing values and patterns provide rich information about target labels in supervised learning tasks (e.g., time series classification). To illustrate this idea, we show some examples from MIMIC-III, a real-world health care dataset, in Figure 1. We plot the Pearson correlation coefficient between variable missing rates, which indicate how often the variable is missing in the time series, and the labels of our interest, such as mortality and ICD-9 diagnoses. We observe that the missing rates are correlated with the labels, and the missing rates with low rate values are usually highly (either positively or negatively) correlated with the labels. These findings demonstrate the usefulness of missingness patterns in solving a prediction task.

In the past decades, various approaches have been developed to address missing values in time series (Schafer & Graham, 2002). A simple solution is to omit the missing data and to perform analysis only on the observed data. A variety of methods have been developed to fill in the missing values, such as smoothing or interpolation (Kreindler & Lumsden, 2012), spectral analysis (Mondal & Percival, 2010), kernel methods (Rehfeld et al., 2011), multiple imputation (White et al., 2011),"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Multivariate time series data in practical applications, such as health care, geoscience, and biology, are characterized by a variety of missing values. In time series prediction and other related tasks, it has been noted that missing values and their missing patterns are often correlated with the target labels, a.k.a., informative missingness. There is very limited work on exploiting the missing patterns for effective imputation and improving prediction performance. In this paper, we develop novel deep learning models, namely GRU-D, as one of the early attempts. GRU-D is based on the Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network. It takes two representations of missing patterns, i.e., masking and time interval, and effectively incorporates them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results.
Experiments on time series classification tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic datasets demonstrate that our models achieve state-of-the-art performance and provide useful insights for better understanding and utilization of missing values in time series analysis."}

(Figure 1: Demonstrations of informative missingness on the MIMIC-III dataset. Left: variable missing rate (x-axis, missing rate; y-axis, input variable). Middle/right: correlations between missing rate and mortality/ICD-9 diagnosis categories, respectively (x-axis, target label; y-axis, input variable; color, correlation value). Please refer to Appendix A.1 for more details.)

and EM algorithm (Garcia-Laencina et al., 2010). Schafer & Graham (2002) and references therein provide excellent reviews on related solutions. However, these solutions often result in a two-step process where imputations are disparate from prediction models and missing patterns are not effectively explored, thus leading to suboptimal analyses and predictions (Wells et al., 2013).

In the meantime, Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and the Gated Recurrent Unit (GRU) (Cho et al., 2014), have been shown to achieve state-of-the-art results in many applications with time series or sequential data, including machine translation (Bahdanau et al., 2014; Sutskever et al., 2014) and speech recognition (Hinton et al., 2012). RNNs enjoy several nice properties, such as strong prediction performance as well as the ability to capture long-term temporal dependencies and variable-length observations. RNNs for missing data have been studied in earlier works (Bengio & Gingras, 1996; Tresp & Briegel, 1998; Parveen & Green, 2001) and applied to speech recognition and blood-glucose prediction. Recent works (Lipton et al., 2016; Choi et al., 2015) tried to handle missingness in RNNs by concatenating missing entries or timestamps with the input or by performing simple imputations. However, there have not been works which model missing patterns within a systematically modified RNN structure for time series classification problems. Exploiting the power of customized RNN models along with the informativeness of missing patterns is a new promising venue to effectively model multivariate time series, and it is the main motivation behind our work.

In this paper, we develop a novel deep learning model based on GRU, namely GRU-D, to effectively exploit two representations of informative missingness patterns, i.e., masking and time interval. Masking informs the model which inputs are observed (or missing), while time interval encapsulates the input observation patterns. Our model captures the observations and their dependencies by applying masking and time interval (using a decay term) to the inputs and network states of GRU, and it jointly trains all model components using back-propagation. Thus, our model not only captures the long-term temporal dependencies of time series observations but also utilizes the missing patterns to improve the prediction results. Empirical experiments on real-world clinical datasets as well as synthetic datasets demonstrate that our proposed model outperforms strong deep learning models built on GRU with imputation as well as other strong baselines. These experiments show that our proposed method is suitable for many time series classification problems with missing data, and in particular is readily applicable to the predictive tasks in emerging health care applications. Moreover,
our method provides useful insights into more general research challenges of time series analysis with missing data beyond classification tasks, including 1) a general deep learning framework to handle time series with missing data, 2) effective solutions to characterize the missing patterns of not-missing-completely-at-random time series data, such as modeling masking and time interval, and 3) an insightful approach to study the impact of variable missingness on the prediction labels by decay analysis.

We denote a multivariate time series with D variables of length T as X = (x_1, x_2, ..., x_T)^T ∈ R^{T×D}, where for each t ∈ {1, 2, ..., T}, x_t ∈ R^D represents the t-th observations (a.k.a., measurements) of all variables and x_t^d denotes the measurement of the d-th variable of x_t. Let s_t ∈ R denote the time stamp when the t-th observation is obtained, and we assume that the first observation is made at time stamp 0 (i.e., s_1 = 0).

Figure 2: An example of measurement vectors x_t, time stamps s_t, masking m_t, and time interval δ_t (X: input time series, 2 variables; M: masking for X; s: time stamps for X; Δ: time interval for X):

X = [ 47  49  NA  40  NA  43  55 ]     M = [ 1  1  0  1  0  1  1 ]
    [ NA  15  14  NA  NA  NA  15 ]         [ 0  1  1  0  0  0  1 ]

s = [ 0  0.1  0.6  1.6  2.2  2.5  3.1 ]

Δ = [ 0.0  0.1  0.5  1.5  0.6  0.9  0.6 ]
    [ 0.0  0.1  0.5  1.0  1.6  1.9  2.5 ]

A time series may contain missing values; we introduce the masking vector m_t ∈ {0, 1}^D to denote which variables are observed at time t:

m_t^d = { 1, if x_t^d is observed;  0, otherwise }

For each variable d, we also maintain the time interval δ_t^d ∈ R since its last observation as

δ_t^d = { s_t - s_{t-1} + δ_{t-1}^d,  t > 1, m_{t-1}^d = 0
        { s_t - s_{t-1},              t > 1, m_{t-1}^d = 1
        { 0,                          t = 1

We investigate the use of recurrent neural networks (RNNs) for time-series classification, as their recursive formulation allows them to handle variable-length sequences naturally. Moreover, an RNN shares the same parameters across all time steps, which greatly reduces the total number of parameters we need to learn. Among the different variants of the RNN, we specifically consider an RNN with gated recurrent units (Cho et al., 2014; Chung et al., 2014), but a similar discussion and similar conclusions are also valid for other RNN models such as LSTM (Hochreiter & Schmidhuber, 1997).

The structure of GRU is shown in Figure 3(a). For each j-th hidden unit, GRU has a reset gate r_t^j and an update gate z_t^j to control the hidden state h_t^j at each time t. The update functions are as follows:

r_t = σ(W_r x_t + U_r h_{t-1} + b_r)
z_t = σ(W_z x_t + U_z h_{t-1} + b_z)
h̃_t = tanh(W x_t + U(r_t ⊙ h_{t-1}) + b)
h_t = (1 - z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t

where the matrices W_z, W_r, W, U_z, U_r, U and the vectors b_z, b_r, b are model parameters. We use σ for the element-wise sigmoid function, and ⊙ for element-wise multiplication. This formulation assumes that all the variables are observed. A sigmoid or soft-max layer is then applied on the output of the GRU layer at the last time step for the classification task.

Figure 3: Graphical illustrations of the original GRU (left) and the proposed GRU-D (right) models. (figure: in GRU-D, the masking vector enters the cell and decay terms are applied to the input and the hidden state.)
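Before describing how missingness enters the GRU, a small NumPy sketch (illustrative names) that derives the masking matrix M and the time-interval matrix Δ of Figure 2 from a raw series with NaNs, directly implementing the two definitions above:

```python
import numpy as np

def masking_and_interval(X, s):
    """X: (T, D) array with NaN for missing entries; s: (T,) time stamps."""
    T, D = X.shape
    M = (~np.isnan(X)).astype(float)
    delta = np.zeros((T, D))
    for t in range(1, T):
        gap = s[t] - s[t - 1]
        # if the previous value was missing, the interval keeps accumulating
        delta[t] = gap + (1 - M[t - 1]) * delta[t - 1]
    return M, delta

X = np.array([[47, np.nan], [49, 15], [np.nan, 14], [40, np.nan],
              [np.nan, np.nan], [43, np.nan], [55, 15]], dtype=float)
s = np.array([0, 0.1, 0.6, 1.6, 2.2, 2.5, 3.1])
M, delta = masking_and_interval(X, s)
print(delta.T)  # reproduces the two rows of Delta in Figure 2
```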
Existing work on handling missing values leads to three possible solutions with no modification of the GRU network structure. One straightforward approach is simply replacing each missing observation with the mean of the variable across the training examples. In the context of GRU, we have

x_t^d ← m_t^d x_t^d + (1 - m_t^d) x̃^d    (1)

where x̃^d is the empirical mean of the d-th variable on the training data. We refer to this baseline as GRU-mean.

A second approach exploits the temporal structure in the time series. For example, we may assume any missing value is the same as its last measurement and use forward imputation (GRU-forward), i.e.,

x_t^d ← m_t^d x_t^d + (1 - m_t^d) x_{t'}^d    (2)

where t' < t is the last time the d-th variable was observed.

Instead of explicitly imputing missing values, the third approach simply indicates which variables are missing and how long they have been missing as part of the input, by concatenating the measurement, masking and time interval vectors as

x_t ← [x_t; m_t; δ_t]    (3)

where x_t can be either from Equation (1) or (2). We later refer to this approach as GRU-simple. Several recent works (Lipton et al., 2016; Choi et al., 2015; Pham et al., 2016) use RNNs on EHR data to model diseases and to predict patient diagnosis from health care time series data with irregular time stamps or missing values, but none of them have explicitly attempted to capture and utilize the missing patterns in their RNNs via systematically modified network architectures. Choi et al. (2015) feed medical codes along with their time stamps into a GRU model to predict the next medical event; this time-stamp-feeding idea is equivalent to the baseline GRU-simple without feeding the masking, which we denote as GRU-simple (interval only). Pham et al. (2016) take time stamps into an LSTM model and modify its forgetting gate by either a time decay or a parametric time, both computed from time stamps. However, their non-trainable decay is not as flexible, and the parametric time does not change the RNN model structure and is similar to GRU-simple (interval only). In addition, neither of them considers missing values in time series medical records, and the time stamp input used in these two models is a vector per patient, not a matrix over each input variable per patient as in ours. Lipton et al. (2016) achieve their best performance on diagnosis prediction by feeding masking with zero-filled missing values; their model is equivalent to GRU-simple without feeding the time interval, and no model structure modification is made to further capture and utilize missingness. We denote their best model as GRU-simple (masking only). In conclusion, our GRU-simple baseline can be considered a generalization of all the related RNN models mentioned above, and as shown in the experiments these GRU-simple variations have quite similar performance.

These approaches solve the missing value issue to a certain extent. However, imputing the missing value with the mean or by forward imputation cannot distinguish whether missing values are imputed or truly observed, and simply concatenating masking and time interval vectors fails to exploit the temporal structure of missing values. Thus none of them fully utilizes the missingness in the data to achieve desirable performance.

"}, {"section_index": "3", "section_name": "2.2 GRU-D: MODEL WITH TRAINABLE DECAYS", "section_text": "To fundamentally address the issue of missing values in time series, we notice two important properties of the missing values in time series, especially in the health care domain. First, the value of a missing variable tends to be close to some default value if its last observation happened a long time ago. This property usually exists in health care data for the human body, as homeostasis mechanisms are considered critical for disease diagnosis and treatment (Vodovotz et al., 2013). Second, the influence of an input variable fades away over time if the variable has been missing for a while. For example, one medical feature in electronic health records (EHRs) is only significant in a certain temporal context (Zhou & Hripcsak, 2007). Therefore we propose a GRU-based model called GRU-D, in which a decay mechanism is designed for the input variables and the hidden states to capture the aforementioned properties. We introduce decay rates in the model to control the decay mechanism by considering the following important factors. First, each input variable in health care time series has its own medical meaning and importance; the decay rates should be flexible enough to differ from variable to variable based on the underlying properties associated with the variables. Second, as many missing patterns are informative in prediction tasks, the decay rate should be indicative of such patterns and benefit the prediction tasks. Furthermore, since the missing patterns are unknown and possibly complex, we aim at learning decay rates from the training data rather than fixing them a priori. That is, we model a vector of decay rates γ as

γ_t = exp{-max(0, W_γ δ_t + b_γ)}    (4)

where W_γ and b_γ are model parameters that we train jointly with all the other parameters of the GRU. We chose the exponentiated negative rectifier in order to keep each decay rate monotonically decreasing in a reasonable range between 0 and 1. Note that other formulations, such as a sigmoid function, can be used instead, as long as the resulting decay is monotonic and lies in the same range.

Our proposed GRU-D model incorporates two different trainable decays to utilize the missingness directly with the input feature values and implicitly in the RNN states. First, for a missing variable, we use an input decay γ_x to decay it over time toward the empirical mean (which we take as a default configuration), instead of using the last observation as it is. Under this assumption, the trainable decay scheme can be readily applied to the measurement vector by

x̂_t^d = m_t^d x_t^d + (1 - m_t^d) (γ_{x_t}^d x_{t'}^d + (1 - γ_{x_t}^d) x̃^d)    (5)

where x_{t'}^d is the last observation of the d-th variable (t' < t) and x̃^d is the empirical mean of the d-th variable. When decaying the input variable directly, we constrain W_{γ_x} to be diagonal, which effectively makes the decay rate of each variable independent from the others. Sometimes the input decay may not fully capture the missing patterns, since not all missingness information can be represented in decayed input values. In order to capture richer knowledge from missingness, we also have a hidden state decay γ_h in GRU-D. Intuitively, this has the effect of decaying the extracted features (GRU hidden states) rather than the raw input variables directly. This is implemented by decaying the previous hidden state h_{t-1} before computing the new hidden state h_t as

ĥ_{t-1} = γ_{h_t} ⊙ h_{t-1}    (6)

To validate the GRU-D model and demonstrate how it utilizes informative missing patterns, in Figure 4 we show the input decay (γ_x) plots and the hidden decay (γ_h) histograms for all the variables for predicting mortality on the PhysioNet dataset. For the input decay, we notice that the decay rate is almost constant for the majority of variables. However, a few variables have large decay, which means that the model relies less on their previous observations for prediction. For example, the changes in the values of weight, arterial pH, temperature, and respiration rate are known to impact the health condition of ICU patients. The hidden decay histograms show the distribution of decay parameters related to each variable. We notice that the parameters related to variables with smaller missing rates are more spread out, indicating that the missingness of those variables has more impact on decaying or keeping the hidden states of the model.

Figure 4: Plots of input decay γ_{x_t} (top; x-axis, time interval δ_t between 0 and 24 hours; y-axis, decay rate γ_t between 0 and 1) and histograms of hidden state decay γ_{h_t} (bottom) for all 33 variables in the GRU-D model for predicting mortality on the PhysioNet dataset. Variables in green are lab measurements; variables in red are vital signs; mr refers to missing rate.

Notice that the decay term can be generalized to LSTM straightforwardly. In practical applications, missing values in time series may contain useful information in a variety of ways; a better model should have the flexibility to capture different missing patterns.
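A minimal NumPy sketch of one GRU-D forward step under Equations (4)-(6). The parameter names are illustrative placeholders (P is a dictionary of weight arrays; the diagonal W_{γ_x} is stored as a vector), and only the forward pass is shown; in practice the whole cell is trained end-to-end by back-propagation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_d_step(x, m, delta, x_last, x_mean, h, P):
    """One GRU-D step. x, m, delta, x_last, x_mean: (D,); h: (H,)."""
    # trainable decays (Eq. 4); w_gx is the diagonal of W_{gamma_x}
    gamma_x = np.exp(-np.maximum(0.0, P["w_gx"] * delta + P["b_gx"]))
    gamma_h = np.exp(-np.maximum(0.0, P["W_gh"] @ delta + P["b_gh"]))
    # input decay toward the empirical mean (Eq. 5)
    x_hat = m * x + (1 - m) * (gamma_x * x_last + (1 - gamma_x) * x_mean)
    # hidden state decay (Eq. 6)
    h = gamma_h * h
    # standard GRU update on the decayed input and state, with the masking
    # vector also fed into the gates through the V matrices
    z = sigmoid(P["Wz"] @ x_hat + P["Uz"] @ h + P["Vz"] @ m + P["bz"])
    r = sigmoid(P["Wr"] @ x_hat + P["Ur"] @ h + P["Vr"] @ m + P["br"])
    h_tilde = np.tanh(P["W"] @ x_hat + P["U"] @ (r * h) + P["V"] @ m + P["b"])
    return (1 - z) * h + z * h_tilde
```

Running this step over t = 1, ..., T and applying a sigmoid or soft-max regressor to the final h_T yields the classifier used throughout the experiments.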
In order to demonstrate the capacity of our GRU-D model, we discuss some model variations in Appendix A.2.

We demonstrate the performance of our proposed models on one synthetic and two real-world health-care datasets, and compare them to several strong machine learning and deep learning approaches in classification tasks. We evaluate our models in different settings, such as early prediction and different training sizes, and investigate the impact of informative missingness. A summary of the statistics of the three datasets is shown in Appendix A.3.

Gesture phase segmentation dataset (Gesture): This UCI dataset (Madeo et al., 2013) has multivariate time series features, regularly sampled and with no missing values, for 5 different gesticulations. We extracted 378 time series and generated 4 synthetic datasets for the purpose of understanding model behaviors with different missing patterns. We treat it as a multi-class classification task.

PhysioNet Challenge 2012 dataset (PhysioNet): This dataset, from the PhysioNet Challenge 2012 (Silva et al., 2012), is a publicly available collection of multivariate clinical time series from 8000 intensive care unit (ICU) records. Each record is a multivariate time series of roughly 48 hours and contains 33 variables such as albumin, heart rate, glucose, etc. We used the Training Set A subset in our experiments, since outcomes (such as in-hospital mortality labels) are publicly available only for this subset. We conduct the following two prediction tasks on this dataset: 1) Mortality task: predict whether the patient dies in the hospital. There are 554 patients with a positive mortality label; we treat this as a binary classification problem. 2) All 4 tasks: predict four tasks: in-hospital mortality, length-of-stay less than 3 days, whether the patient had a cardiac condition, and whether the patient was recovering from surgery.
We treat this as a multi-task classification problem.

MIMIC-III dataset (MIMIC-III): This public dataset (Johnson et al., 2016) has de-identified clinical care data collected at Beth Israel Deaconess Medical Center from 2001 to 2012. It contains over 58,000 hospital admission records. We extracted 99 time series features from 19714 admission records for 4 modalities, including input-events (fluids into the patient, e.g., insulin), output-events (fluids out of the patient, e.g., urine), lab-events (lab test results, e.g., pH values) and prescription-events (drugs prescribed by doctors, e.g., aspirin). These modalities are known to be extremely useful for monitoring ICU patients. We only use the first 48 hours of data after admission from each time series. We perform the following two predictive tasks: 1) Mortality task: predict whether the patient dies in the hospital after 48 hours. There are 1716 patients with a positive mortality label, and we perform binary classification. 2) ICD-9 Code tasks: predict 20 ICD-9 diagnosis categories (e.g., respiratory system diagnosis) for each admission. We treat this as a multi-task classification problem.

"}, {"section_index": "4", "section_name": "3.2 METHODS AND IMPLEMENTATION DETAILS", "section_text": "We categorize all evaluated prediction models into the three following groups:

- Non-RNN Baselines (Non-RNN): We evaluate logistic regression (LR), support vector machines (SVM) and random forests (RF), which are widely used in health care applications.
- RNN Baselines (RNN): We take GRU-mean, GRU-forward, GRU-simple, and LSTM-mean (an LSTM model with mean imputation on the missing measurements) as RNN baselines.
- Proposed Methods (Proposed): This is our proposed GRU-D model from Section 2.2.

The non-RNN baselines cannot handle missing data directly. We carefully design experiments for the non-RNN models to capture the informative missingness as much as possible, to allow a fair comparison with the RNN methods. Since non-RNN models only work with fixed-length inputs, we regularly sample the time-series data to get a fixed-length input and perform imputation to fill in the missing values. Similar to the RNN baselines, we can concatenate the masking vector along with the measurements and feed it to the non-RNN models. For the PhysioNet dataset, we sample the time series on an hourly basis and propagate measurements forward (or backward) in time to fill gaps. For the MIMIC-III dataset, we consider two-hourly samples (in the first 48 hours) and do forward (or backward) imputation; our preliminary experiments showed that two-hourly samples obtain better performance than one-hourly samples for MIMIC-III. We report results both for the concatenation of input and masking vectors (i.e., SVM/LR/RF-simple) and for the input vector only, without masking (i.e., SVM/LR/RF-forward). A sketch of this preprocessing is given below.
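The promised sketch of the non-RNN preprocessing, in pandas/NumPy. Names and the exact fill order are illustrative assumptions; the point is the regular sampling, forward/backward fill, and optional masking concatenation described above.

```python
import numpy as np
import pandas as pd

def featurize(df, hours=48, step=1):
    """df: index = time offset from admission (hours), columns = variables.
    Returns fixed-length features for the '-forward' and '-simple' variants."""
    grid = np.arange(0, hours, step)
    sampled = df.reindex(df.index.union(grid)).sort_index()
    sampled = sampled.ffill().bfill().loc[grid]        # impute gaps in time
    mask = df.reindex(grid).notna().astype(float)      # observed-at-grid indicator
    x_forward = sampled.values.ravel()                 # SVM/LR/RF-forward input
    x_simple = np.concatenate([x_forward, mask.values.ravel()])  # "-simple" input
    return x_forward, x_simple
```

With step=1 this matches the hourly PhysioNet sampling, and with step=2 the two-hourly MIMIC-III sampling.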
We use scikit-learn (Pedregosa et al., 2011) for the non-RNN model implementations and tune the parameters by cross-validation. We choose the RBF kernel for SVM since it performs better than other kernels.

For the RNN models, we use a one-layer RNN to model the sequence and then apply a soft-max regressor on top of the last hidden state h_T for classification. We use 100 and 64 hidden units in GRU-mean for the MIMIC-III and PhysioNet datasets, respectively. All the other RNN models were constructed to have a comparable number of parameters (Appendix A.3.2 compares all GRU models tested in the experiments in terms of model size). For GRU-simple, we use mean imputation for the input, as shown in Equation (1). Batch normalization (Ioffe & Szegedy, 2015) and dropout (Srivastava et al., 2014) with rate 0.5 are applied to the top regressor layer. We train all the RNN models with the Adam optimization method (Kingma & Ba, 2014) and use early stopping to find the best weights on the validation dataset. All input variables are normalized to have 0 mean and 1 standard deviation. We report the results from 5-fold cross validation in terms of the area under the ROC curve (AUC score). We provide more detailed comparisons of the RNN baselines and variations in Appendix A.3.4 and evaluations of multilayer RNN models in Appendix A.3.5.

Exploiting informative missingness on the synthetic dataset: As illustrated in Figure 1, missing patterns can be useful in solving prediction tasks. A robust model should exploit informative missingness properly and avoid inducing nonexistent relations between missingness and predictions. To evaluate the impact of modeling missingness, we conduct experiments on the synthetic Gesture datasets. We process the data in 4 different settings with the same missing rate but different correlations between the missing rate and the label; a higher correlation implies more informative missingness. Figure 5 shows the AUC score comparison of three GRU baseline models (GRU-mean, GRU-forward, GRU-simple) and the proposed GRU-D. Since GRU-mean and GRU-forward do not utilize any missingness (i.e., masking or time interval), they perform similarly across all 4 settings. GRU-simple and GRU-D benefit from utilizing the missingness, especially when the correlation is high. Our GRU-D achieves the best performance in all settings, while GRU-simple fails when the correlation is low. The results on the synthetic datasets demonstrate that our proposed model can properly model and distinguish useful missing patterns compared with the baselines.

(Figure 5: Model performance on the synthetic Gesture datasets; x-axis, average Pearson correlation between variable missing rates and labels; y-axis, AUC score.)

Table 1: Average AUC score (mean ± std) for multi-task prediction.

Models        MIMIC-III ICD-9 (20 tasks)   PhysioNet (all 4 tasks)
GRU-mean      0.7070 ± 0.001               0.8099 ± 0.011
GRU-forward   0.7077 ± 0.001               0.8091 ± 0.008
GRU-simple    0.7105 ± 0.001               0.8249 ± 0.010
GRU-D         0.7123 ± 0.003               0.8370 ± 0.012

Figure 6: Performance for early prediction of mortality on the MIMIC-III dataset; x-axis, number of hours after admission; y-axis, AUC score. The dashed line shows the RF-simple result at 48 hours. (figure: AUC over time for SVM-simple, RF-simple, GRU-mean, GRU-forward, GRU-simple, GRU-D at 12 to 48 hours.)
Similar toGal (2015), we apply dropout rate of 0.3 with same dropout samples at. each time step on weights W, U, V. Table|2 shows the prediction performance of all the models on mortality task. All models except for random forest improve their performance when they feed missingness indicators along with inputs. The proposed GRU-D achieves the best AUC score on both. datasets. We also conduct multi-task classification experiments for all 4 tasks on PhysioNet and 20. ICD-9 code tasks on MIMIC-III using all the GRU models. As shown in Table[1] GRU-D performs best in terms of average AUC score across all tasks and in most of the single tasks..\nTable 2: Model performances measured by AUC score (mean std) for mortality predictior"}, {"section_index": "5", "section_name": "3.4 DISCUSSIONS", "section_text": "Online prediction in early stage Although our model is trained on the first 48 hours data and. makes prediction at the last time step, it can be used directly to make predictions before it sees all the time series and can make predictions on the fly. This is very useful in applications such as health. care, where early decision making is beneficial and critical for patient care. Figure|6 shows the online prediction results for MIMIC-III mortality task. As we can see, AUC is around 0.7 at first 12 hours for. all the GRU models and it keeps increasing when longer time series is fed into these models. GRU-D. and GRU-simple, which explicitly handle missingness, perform consistently better than the other two. methods. In addition, GRU-D outperforms GRU-simple when making predictions given time series. of more than 24 hours, and has at least 2.5% higher AUC score after 30 hours. This indicates that. GRU-D is able to capture and utilize long-range temporal missing patterns. Furthermore, GRU-D. achieves similar prediction performance (i.e., same AUC) as best non-RNN baseline model with less. time series data. As shown in the figure, GRU-D has same AUC performance at 36 hours as the. best non-RNN baseline mode1 (RF-simple) at 48 hours. This 12 hour improvement of GRU-D over non-RNN baseline is highly significant in hospital settings such as ICU where time-saving critical. decisions demands accurate early predictions.\nFigure 7: Performance for predicting mortali- ty on subsampled MIMIC-III dataset. x-axis. subsampled dataset size; y-axis, AUC score.\nModels MIMIC-III PhysioNet LR-forward 0.7589 0.015 0.7423 0.011 SVM-forward 0.7908 0.006 0.8131 0.018 RF-forward 0.8293 0.004 0.8183 0.015 Non-RNN LR-simple 0.7715 0.015 0.7625 0.004 SVM-simple 0.8146 0.008 0.8277 0.012 RF-simple 0.8294 0.007 0.8157 0.013 LSTM-mean 0.8142 0.014 0.8025 0.013 GRU-mean 0.8192 0.013 0.8195 0.004 RNN GRU-forward 0.8252 0.011 0.8162 0.014 GRU-simple 0.8380 0.008 0.8155 0.004 Proposed GRU-D 0.8527 0.003 0.8424 0.012"}, {"section_index": "6", "section_name": "4 SUMMARY", "section_text": "In this paper, we proposed novel GRU-based model to effectively handle missing values in multivariate. time series data. Our model captures the informative missingness by incorporating masking and time. interval directly inside the GRU architecture. Empirical experiments on both synthetic and real-world. health care datasets showed promising results and provided insightful findings. In our future work. we will explore deep learning approaches to characterize missing-not-at-random data and we will. 
"}, {"section_index": "6", "section_name": "4 SUMMARY", "section_text": "In this paper, we proposed a novel GRU-based model to effectively handle missing values in multivariate time series data. Our model captures the informative missingness by incorporating masking and time interval directly inside the GRU architecture. Empirical experiments on both synthetic and real-world health care datasets showed promising results and provided insightful findings. In future work, we will explore deep learning approaches to characterize missing-not-at-random data, and we will conduct theoretical analysis to understand the behaviors of existing solutions for missing values.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Yoshua Bengio and Francois Gingras. Recurrent neural networks for missing or asynchronous data. Advances in Neural Information Processing Systems, pp. 395-401, 1996.

Zhengping Che, David Kale, Wenzhe Li, Mohammad Taha Bahadori, and Yan Liu. Deep computational phenotyping. In SIGKDD, 2015.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Edward Choi, Mohammad Taha Bahadori, and Jimeng Sun. Doctor AI: Predicting clinical events via recurrent neural networks. arXiv preprint arXiv:1511.05942, 2015.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Pedro J Garcia-Laencina, Jose-Luis Sancho-Gomez, and Anibal R Figueiras-Vidal. Pattern classification with missing data: a review. Neural Computing and Applications, 19(2), 2010.

Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97, 2012.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

AEW Johnson, TJ Pollard, L Shen, L Lehman, M Feng, M Ghassemi, B Moody, P Szolovits, LA Celi, and RG Mark. MIMIC-III, a freely accessible critical care database. Scientific Data, 2016.

Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2013.

Zachary C Lipton, David C Kale, and Randall Wetzel. Directly modeling missing data in sequences with RNNs: Improved classification of clinical time series. arXiv preprint arXiv:1606.04130, 2016.

Renata CB Madeo, Clodoaldo AM Lima, and Sarajane M Peres. Gesture unit segmentation using support vector machines: segmenting gestures from rest positions.
In SAC, 2013.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, volume 2, pp. 3, 2010.

Debashis Mondal and Donald B Percival. Wavelet variance analysis for gappy time series. Annals of the Institute of Statistical Mathematics, 62(5):943-966, 2010.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

Trang Pham, Truyen Tran, Dinh Phung, and Svetha Venkatesh. Deepcare: A deep dynamic memory model for predictive medicine. In Advances in Knowledge Discovery and Data Mining, 2016.

Kira Rehfeld, Norbert Marwan, Jobst Heitzig, and Jurgen Kurths. Comparison of correlation analysis techniques for irregularly sampled time series. Nonlinear Processes in Geophysics, 18(3), 2011.

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.

Donald B Rubin. Inference and missing data. Biometrika, 63(3):581-592, 1976.

Ivanovitch Silva, Galen Moody, Daniel J Scott, Leo A Celi, and Roger G Mark. Predicting in-hospital mortality of ICU patients: The PhysioNet/Computing in Cardiology Challenge 2012. In CinC, 2012.

Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1), 2014.

Volker Tresp and Thomas Briegel. A solution for missing data in recurrent neural networks with an application to blood glucose prediction. NIPS, pp. 971-977, 1998.

Brian J Wells, Kevin M Chagin, Amy S Nowacki, and Michael W Kattan. Strategies for handling missing data in electronic health record derived data. EGEMS, 1(3), 2013.

Ian R White, Patrick Royston, and Angela M Wood. Multiple imputation using chained equations: issues and guidance for practice. Statistics in Medicine, 30(4):377-399, 2011.

Li Zhou and George Hripcsak. Temporal reasoning with medical data: a review with emphasis on medical natural language processing. Journal of Biomedical Informatics, 40(2):183-202, 2007.

In many time series applications, the pattern of missing variables in the time series is often informative and useful for prediction tasks. Here, we empirically confirm this claim on a real health care dataset by investigating the correlation between the missingness and the prediction labels (mortality and ICD-9 diagnosis categories). We denote the missing rate for a variable d as p_x^d and calculate it as the fraction of missing entries over the T time steps. For each prediction task, we compute the Pearson correlation coefficient between p_x^d and label l across all the time series. As shown in Figure 1, we observe that on the MIMIC-III dataset the missing rates with low rate values are usually highly (either positively or negatively) correlated with the labels. The distinct correlation between missingness and labels demonstrates the usefulness of missingness patterns in solving prediction tasks.

"}, {"section_index": "8", "section_name": "A.2 GRU-D MODEL VARIATIONS", "section_text": "In this section, we discuss some variations of the GRU-D model and also compare some related RNN models used for time series with missing data with the proposed model.

Figure 8: Graphical illustrations of variations of the proposed GRU models: (a) GRU-DI, (b) GRU-DS, (c) GRU-DM, (d) GRU-IMP.

"}, {"section_index": "9", "section_name": "A.2.1 GRU MODEL WITH DIFFERENT TRAINABLE DECAYS", "section_text": "The proposed GRU-D applies trainable decays on both input and hidden state transitions in order to capture the temporal missing patterns explicitly. This decay idea can be straightforwardly generalized to other parts inside the GRU model, separately or jointly, under different assumptions on the impact of missingness. As comparisons, we also describe and evaluate several modifications of the GRU-D model.

GRU-DI (Figure 8(a)) and GRU-DS (Figure 8(b)) decay only the input and only the hidden state, by Equations (5) and (6), respectively. They can be considered as two simplified versions of the proposed GRU-D. GRU-DI aims at capturing the direct impact of missing values in the data, while GRU-DS captures the more indirect impact of missingness.
Another intuition comes from this perspective: if an input variable is just missing, we should pay more attention to this missingness; however, if an variable has been missing for a long time and keeps missing, the missingness becomes less important We can utilize this assumption by decaying the masking. This brings us the model GRU-DM shown in Figure|8(c) where we replace the masking md fed into GRU-D in by\nwhere the equality holds since md is either O or 1. We decay the masking for each variable indepen dently from others by constraining W m. to be diagonal."}, {"section_index": "10", "section_name": "A.2.2 GRU-IMP: GOAL-ORIENTED IMPUTATION MODEL", "section_text": "We may alternatively let the GRU-RNN predict the missing values in the next timestep on it own. When missing values occur only during test time, we simply train the model to predict the measurement vector of the next time step as a language model (Mikolov et al.|2010) and use it tc fill the missing values during test time. This is unfortunately not applicable for some time series applications such as in health care domains, which also have missing data during training.\nIn many time series applications, the pattern of missing variables in the time series is often informative and useful for prediction tasks. Here, we empirically confirm this claim on real health care dataset by investigating the correlation between the missingness and prediction labels (mortality and ICD. 9 diagnosis categories). We denote the missing rate for a variable d as px and calculate it by T. For each prediction task, we compute the Pearson correlation coefficient between px and label l. across all the time series. As shown in Figure[1] we observe that on MIMIC-III dataset the missing. rates with low rate values are usually highly (either positive or negative) correlated with the labels. The distinct correlation between missingness and labels demonstrates usefulness of missingness patterns in solving prediction tasks..\nMASK MASK MASK MASK IN IN IN IN OUT OUT OUT OUT (a) GRU-DI (b) GRU-DS (c) GRU-DM (d) GRU-IMP\n-m _ m+ Y Ymt YY\nInstead, we propose goal-oriented imputation model here called GRU-IMP, and view missing values. as latent variables in a probabilistic graphical model. Given a timeseries X, we denote all the missing. variables by Mx and all the observed ones by Ox. Then, training a time-series classifier with. missing variables becomes equivalent to maximizing the marginalized log-conditional probability of a correct label l, i.e., log p(l|O x ).\nThe exact marginalized log-conditional probability is however intractable to compute, and we instead maximize its lowerbound:\nlog p(l|Ox) = log `p(l|Mx,Ox)p(Mx|Ox) EMx~p(Mx|Ox)logp(l|Mx,Ox) M x\nwhere we assume the distribution over the missing variables at each time step is only conditioned on all the previous observations:\nT mt p(Mx|Ox)=I II p(x|x1:(t-1),m1:(t-1),0 t=11<d<D\nBy further assuming that xt ~ N (t, ?) , t = Yt O (Wxht-1 + bx) and t = 1, we can use a reparametrization technique widely used in stochastic variational inference (Kingma & Welling2013 Rezende et al.] 2014) to estimate the gradient of the lowerbound efficiently. During the test time, we simply use the mean of the missing variable, i.e., xt = t, as we have not seen any improvement from Monte Carlo approximation in our preliminary experiments. We view this approach as a goal-oriented. imputation method and show its structure in Figure[8(d)] The whole model is trained to minimize the. 
classification cross-entropy error llog_loss and we take the negative log likelihood of the observed values as a regularizer."}, {"section_index": "11", "section_name": "A.3.1 DATA STATISTICS", "section_text": "For each of the three datasets used in our experiments. we list the number of samples. the number of input variables, the mean and max number of time steps for all the samples, and the mean of all the variable missing rates in Table3\nTable 3: Dataset statistics\nMIMIC-III PhysioNet2012 Gesture # of samples (N) 19714 4000 378 # of variables (D). 99 33 23 Mean of # of time steps. 35.89 68.91 21.42 Maximum of # of time steps 150 155 31 Mean of variable missing rate 0.9621 0.8225 N/A\nAlthough this lowerbound is still intractable to compute exactly, we can approximate it by Monte Carlo method, which amounts to sampling the missing variables at each time as the RNN reads the input sequence from the beginning to the end, such that\nmxd+(1-m) )x\nN Tn d=1 md log p(x)|t, od) l = llogloss + d=1 mq n=1 t=1"}, {"section_index": "12", "section_name": "A.3.2 GRU MODEL SIZE COMPARISON", "section_text": "In order to fairly compare the capacity of all GRU-RNN models, we build each model in proper size so they share similar number of parameters. Table4 shows the statistics of all GRU-based models fo1 on three datasets. We show the statistics for mortality prediction on the two real datasets, and it's almost the same for multi-task classifications tasks on these datasets. In addition, having comparable number of parameters also makes all the models have number of iterations and training time close in the same scale in all the experiments.\nTable 4: Comparison of GRU model size in our experiments. Size refers to the number of hidder states (h) in GRU\nGesture MIMIC-III PhysioNet Models 18 input variables 99 input variables 33 input variables. Size # of parameters Size # of parameters Size # of parameters GRU-mean&forward 64 16281 100 60105 64 18885 GRU-simple 50 16025 56 59533 43 18495 GRU-D 55 16561 67 60436 49 18838"}, {"section_index": "13", "section_name": "A.3.3 MULTI-TASK PREDICTION DETAILS", "section_text": "The RNN models for multi-task learning with m tasks is almost the same as that for binary classi. fication. except that 1) the soft-max prediction layer is replaced by a fully connected layer with sigmoid logistic functions, and 2) a data-driven prior regularizer (Che et al.2015), parameterized by comorbidity (co-occurrence) counts in training data, is applied to the prediction layer to improve the. classification performance. We show the AUC scores for predicting 20 ICD-9 diagnosis categories. on MIMIC-III dataset in Figure[9] and all 4 tasks on PhysioNet dataset in Figure[10] The proposec GRU-D achieves the best average AUC score on both datasets and wins 11 of the 20 ICD-9 predictior. tasks.\nGRU-mean GRU-forward GRU-simple GRU-D 0.85 0.75 0.65 0.55 3 4 5 6 8 9 10 11 13 13 10 10 .20\nFigure 9: Performance for predicting 20 ICD-9 diagnosis categories on MIMIC-III dataset. x-axis, ICD-9 diagnosis category id; y-axis, AUC score..\n1GRU-mean GRU-forwardGRU-simpleGRU-D 0.9 0.8 0.7 0.6 mortality los < 3 surgery cardiac\nFigure 10: Performance for predicting all 4 tasks on PhysioNet dataset. 
mortality, in-hospital mortality; los< 3, length-of-stay less than 3 days; surgery, whether the patient was recovering from surgery; cardiac, whether the patient had a cardiac condition; y-axis, AUC score."}, {"section_index": "14", "section_name": "A.3.4 EMPIRICAL COMPARISON OF MODEL VARIATIONS", "section_text": "As a thorough empirical comparison, we test all GRU model variations mentioned in Appendix|A.2 along with the proposed GRU-D. These include 1) 4 models with trainable decays (GRU-DI, GRU-DS GRU-DM, GRU-IMP), and 2) two models simplified from GRU-simple (interval only and masking only). The results are shown in Table5 As we can see, GRU-D performs best among these models.\nTable 5: Model performances of GRU variations measured by AUC score (mean std) for mortality prediction."}, {"section_index": "15", "section_name": "A.3.5 EVALUATION ON MULTI-LAYER RNNS", "section_text": "We also conducted experiments on 2-layer RNN models to demonstrate the superiority of our proposed GRU-D models can be generalized to multi-layer RNNs. For all baseline and proposed GRU models we add one standard GRU layer on top of the baseline or proposed GRU layer. We tested models both with similar number of parameters to single layer models and with more parameters. As shown in Table[6|and[7] our GRU-D model consistently outperforms other baselines in all cases, and models with moderate size perform as good as larger models with more parameters. Compared with 1-layer RNNs, all models with deeper structures perform much better on the large MIMIC-III dataset but no better on the relative small PhysioNet dataset.\nTable 6: Comparison of multi-layer GRU models for mortality prediction on PhysioNet dataset. Size refers to the numbers of hidden states of 2 GRU layers..\nModels MIMIC-II PhysioNet GRU-simple (masking only) 0.8367 0.009 0.8226 0.010 Baselines GRU-simple (interval only) 0.8266 0.009 0.8125 0.005 GRU-simple 0.8380 0.008 0.8155 0.004 GRU-DI 0.8345 0.006 0.8328 0.008 GRU-DS 0.8425 0.006 0.8241 0.009 Proposed GRU-DM 0.8342 0.005 0.8248 0.009 GRU-IMP 0.8248 0.010 0.8231 0.005 GRU-D 0.8527 0.003 0.8424 0.012\nPhysioNet Models Size # of params. AUC score GRU-mean 40, 40 18643 0.8157 0.008 Similar GRU-forward 40,40 18643 0.8205 0.008 size GRU-simple 32, 32 18947 0.8159 0.007 GRU-D 34, 34 18599 0.8420 0.009 GRU-mean 64, 64 43651 0.8199 0.002 Larger GRU-forward 64, 64 43651 0.8112 0.035 size GRU-simple 43, 64 39250 0.8208 0.009 GRU-D 49, 64 40739 0.8363 0.013\nTable 7: Comparison of multi-layer GRU models for mortality prediction on MIMIC-III dataset. Size refers to the numbers of hidden states of 2 GRU layers..\nMIMIC-III Models Size # of params. AUC score GRU-mean 66, 66 59271 0.9538 0.005 Similar GRU-forward 66, 66 59271 0.9441 0.005 size GRU-simple 46,46 60355 0.9527 0.005 GRU-D 52, 52 60989 0.9606 0.002 GRU-mean 100, 64 91747 0.9523 0.006 GRU-forward 100, 64 91747 0.9443 0.003 GRU-simple 56, 64 82771 0.9520 0.003 Larger GRU-D 67, 64 85775 0.9604 0.003 size GRU-mean 100, 128 148067 0.9539 0.006 GRU-forward 100, 128 148067 0.9457 0.005 GRU-simple 56, 128 130643 0.9523 0.003 GRU-D 67, 128 135759 0.9618 0.002"}] |
HJrDIpiee | [{"section_index": "0", "section_name": "INVESTIGATING RECURRENCE AND ELIGIBILITY TRACES IN DEEP O-NETWORKS", "section_text": "Jean Harb, Doina Precup\n-P School of Computer Science McGill University\njharb,dprecup}@cs.mcgill.ca"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Eligibility traces in reinforcement learning are used as a bias-variance trade-off and can often speed up training time by propagating knowledge back over time- steps in a single update. We investigate the use of eligibility traces in combination with recurrent networks in the Atari domain. We illustrate the benefits of both recurrent nets and eligibility traces in some Atari games, and highlight also the importance of the optimization used in the training."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep reinforcement learning has had many practical successes in game playing (Mnih et al (2015),Silver et al.(2016)) and robotics (Levine & Abbeel(2014)). Our interest is in further explor ing these algorithms in the context of environments with sparse rewards and partial observability. Tc this end, we investigate the use of two methods that are known to mitigate these problems: recurren networks, which provide a form of memory summarizing past experiences, and eligibility traces which allow information to propagate over multiple time steps. Eligibility traces have been showr empirically to provide faster learning (Sutton & Barto (2017), in preparation) but their use with dee RL has been limited so far (van Seijen & Sutton(2014),Hausknecht & Stone(2015)). We provid experiments in the Atari domain showing that eligibility traces boost the performance of Deep RI We also demonstrate a surprisingly strong effect of the optimization method on the performance o the recurrent networks.\nThe paper is structured as follows. In Sec.2lwe provide background and notation needed for the paper. Sec.3|describes the algorithms we use. In sec. 4|we present and discuss our experimental results. In Sec. 5|we conclude and present avenues for future work.\nA Markov Decision Process (MDP) consists of a tuple (S, A, r, P, y), where S is the set of states. A is the set of actions, r : S A +> R is the reward function, P(s'[s, a) is the transition function. (giving the next state distribution, conditioned on the current state and action), and y E [0, 1) is the discount factor. Reinforcement learning (RL) (Sutton & BartoJ|1998) is a framework for solving unknown MDPs, which means finding a good (or optimal) way of behaving, also called a policy. RI. works by obtaining transitions from the environment and using them, in order to compute a policy. that maximizes the expected return, given by E[t=o 'rt].\nThe state-value function for a policy : S A -> [0, 1], V(s), is defined as the expected return. obtained by starting at state s and picking actions according to . State-action values Q(s, a) are similar to state values, but conditioned also on the initial action a. A policy can be derived from the Q values by picking always the action with the best estimated value at any state..\nMonte Carlo (MC) and Temporal Difference (TD) are two standard methods for updating the value function from data. 
In MC, an entire trajectory's return is used as the target value of the current\nX MC error = ) rt+i - I i=0\nQ-learning is an RL algorithm that allows an agent to learn by imagining that it will take the best possible action in the following step:\nTD error = rt + y max Q(st+1, a) - Q(st, at)"}, {"section_index": "3", "section_name": "2.1 ELIGIBILITY TRACES", "section_text": "Eligibility traces are a fundamental reinforcement learning mechanism which allows a trade-off between TD and MC. MC methods suffer from high variance, as many trajectories can be taker from any given state and stochasticity is often present in the MDP. TD suffers from high bias, as it updates values based on its own estimates.\nUsing eligibility traces allows one to design algorithms that cover the middle-ground between MC. and TD. The central notion for these are n-step returns, which provide a way of calculating the target. by using the value estimate for the state which occurs n steps in the future (compared to the current State):\nn-1 t+i+Y nV(St+n i=0\nWhen n is 1, the results is the TD target, and taking n -> oo yields the MC target\nEligibility traces use a geometric weighting of these n-step returns, where the weight of the k-step return is A times the weight of the k - 1-step return. Using a A = 0 reduces to using TD, as al n-steps for n > 1 have a weight of 0. One of the appealing effects of using eligibility traces is thai a single update allows states many steps behind a reward signal to receive credit. This propagates knowledge back at a faster rate, allowing for accelerated learning. Especially in environments where rewards are sparse and/or delayed, eligibility traces can help assign credit to past states and actions Without traces, seeing a sparse reward will only propagate the value back by one step, which in turn needs to be sampled to send the value back a second step, and so on.\nThis way of viewing eligibility traces is called the forward view, as states are looking ahead at the. rewards received in the future. The forward view is rarely used, as it requires a state to wait for the. future to unfold before calculating an update, and requires memory to store the experience. There i an equivalent view called the backward view, which allows us to calculate updates for every previous. state as we take a single action. This requires no memory and lets us perform updates without havin. to wait for the future. However, this view has had limited success in the neural network setting as. it requires using a trace on each neuron of the network, which tend to be dense and heavily used a each step resulting in noisy signals. For this reason, eligibility traces aren't heavily used when using deep learning, despite their potential benefits.\nQ(X) is a variant of Q-learning where eligibility traces are used to calculate the TD error. As men tioned previously, the backwards view of traces is traditionally used.\nTD error = rt + yV(st+1) - V(st\nThis is an instance of off-policy learning, in which the agent gathers data with an exploratory policy. which randomizes the choice of action, but updates its estimates by constructing targets according. 
to a differnet policy (in this case, the policy that is greedy with respect to the current value estimates\n8 8 i-1 R=(1-xR=(1--1 i=0 i=1 j=0"}, {"section_index": "4", "section_name": "2.2 DEEP O-NETWORKS", "section_text": "Mnih et al.[(2015) introduced deep Q-networks (DQN), one of the first successful reinforcement learning algorithms that use deep learning for function approximation in a way general enough which is applicable to a variety of environments. Applying it to a set of Atari games, they used. a convolutional neural network (CNN) which took as input the last four frames of the game, and. output Q-values for each possible action.\nEquation |6|shows the DQN cost function, where we are optimizing the 0 parameters. The 0 parameters represent frozen Q-value weights which are update at a chosen frequency\nAs introduced in Hausknecht & Stone(2015), deep recurrent Q-networks (DRQN) are a modifica tion on DQN, where single frames are passed through a CNN, which generates a feature vector that. is then fed to an RNN which finally outputs Q-values. This architecture gives the agent a mem- ory, allowing it to learn long-term temporal effects and handle partial observability, which is the case in many environments. The authors showed that randomly blanking out frames was difficult to overcome for DQN, but that DRQN learned to handle without issue..\nTo train DRQN, they proposed two variants of experience replay. The first was to sample entire trajectories and run the RNN from end to end. However this is very computationally demanding as some trajectories can be over 1oooo steps long. The second alternative was to sample sub. trajectories instead of single transitions. This is required as the RNN needs to fill its hidden state and to allow it to understand the temporal aspect of the data.."}, {"section_index": "5", "section_name": "2.3 OPTIMIZERS", "section_text": "Stochastic gradient descent (SGD) is generally the algorithm used to optimize neural networks However, some information is lost during the process as past gradients might signal that a weight drastically needs to change, or that it is oscillating, requiring a decrease in learning rate. Adaptive SGD algorithms have been built to use this information.\nhe rest of the paper. when mentioning RMSprop . we'll be referring to this version\nFinally,Kingma & Ba (2014) introduced Adam, which is essentially RMSprop coupled with Nes terov momentum, along with the running averages being corrected for bias. We have a term for the. rate of momentum of each of the running averages. To calculate the update with Adam, we start with the updating the averages m = 1m + (1 1)V0, v = 2v + (1 - 2)V02, the correct their biases. m m\nA few versions of Q() exist, but the most used one is Watkins's Q(). As Q-learning is off-policy. the sequence of actions used in the past trajectory used to calculate the trace might be different from the actions that the current policy might take. In that case, one should not be using the trajectory past the point where actions differ. To handle such a case, Watkins's Q() sets the trace to O if the action that the current policy would select is different from the one used in the past.\nL(st,at|0) = (rt + y maxQ(st+1,a'|0-) - Q(st,at|0))\nRMSprop (Tieleman & Hinton(2012)), uses a geometric averaging over gradients squared, and divides the current gradient by its square root. 
To perform RMSprop, first we calculate the averaging as g = g + (1 - )V02 and then update the parameters 0 0 + a- Ve\nDQN (Graves(2013)) introduced a variant of RMSprop where the gradient is instead divided by the standard deviation of the running average. First we calculate the running averages m = Bm + (1 - 3)V0 and g = g + (1 - )V02, and then update the parameters using 0 0 + a V e In - m2+\n13 s2 Q(St, at) Q(St+1, at+1) Q(St+2, at+2) Q(St+3, at+3) Q(St+4, at+4) T St-3 St-2 St-1 St St+1 St+2 St+3 St+4\nFigure 1: This graph illustrates how a sample from experience replay is used in training. We use a number of frames to fill the hidden state of the RNN. Then, for the states used for training, we have the RNN output the Q-values. Finally, we calculate each n-step return and weight them according to A, where the arrows represent the forward view of each trace. All states are passed though the CNN before entering the RNN.\nAs explained, the forward view of eligibility traces can be useful, but is computationally demanding. in terms of memory and time. One must store all transitions and apply the neural network to each state in the trajectory. By using DRQN, experience replay is already part of the algorithm, which. removes the memory requirement of the traces. Then, by training on sub-trajectories of data, the. states must be run through the RNN with all state values as the output, which eliminates the compu. tational cost. Finally, all that's left to use eligibility traces is simply to calculate the weighted sum. of the targets, which is very cheap to do..\nWe tested the algorithms on two Atari 2600 games, part of the Arcade Learning Environment (Belle-. mare et al.(2012)), Pong and Tennis. The architecture used is similar to the one used in Hausknecht. & Stone(2015). The frames are converted to gray-scale and re-sized to 84x84. These are then fed. to a CNN with the first layer being 32 8x8 filters and a stride of 4, followed by 64 4x4 filters with a stride of 2, then by 64 3x3 filters with a stride of 1. The output of the CNN is then flattened before. being fed to a single dense layer of 512 output neurons, which is finally fed to an LSTM (Hochreiter. & Schmidhuber(1997)) with 512 cells. We then have a last linear layer that takes the output of the recurrent layer to output the Q-values. All layers before the LSTM are activated using rectified. linear units (ReLU).\nAs mentioned in subsection 2.2.1] we also altered experience replay to sample sub-trajectories. We use backprop through time (BPTT) to train the RNN, but only train on a sub-trajectory of experience. In runtime, the RNN will have had a large sequence of inputs in its hidden state, which can be problematic if always trained with an empty hidden state. Like in Lample & Singh Chaplot (2016). we therefore sample a slightly longer length of trajectory and use the first m states to fill the hidden state. In our experiments, we selected trajectory lengths of 32, where the first 10 states are used as filler and the remaining 22 are used for the traces and TD costs. We used a batch size of 4..\ns2 Q(St, at) Q(St+1, at+1) Q(St+2, at+2) Q(St+3, at+3) Q(St+4, at+4) St-3 St-2 St-1 St St+1 St+2 St+3 St+4\nIn this section we analyze the use of eligibility traces when training DRQN and try both RMSprop. and Adam as optimizers. We only tested the algorithms on fully observable games as to compare the learning capacities without the unfair advantage of having a memory, as would be the case on. 
partially observable environments.\nAll experiments using eligibility traces use X = 0.8. Furthermore, we use Watkins's Q(). To limit computation costs of using traces, we cut the trace off once it becomes too small. In our experiments. we choose the limit of 0.01. which allows the traces to affect 21 states ahead (when = 0.8). We\ncalculate the trace for every state in the trajectory, except for a few in the beginning, use to fill in the hidden state of the RNN.\nTesting phases are consistent across all models, with the score being the average over each game played during 125000 frames. We also use an e of 0.05 for action selection..\nChoose k as number of trace steps and m as RNN-fller steps Initialize weights 0, experience replay D 0- + 0 S S0 repeat Initialize RNN hidden state to 0. repeat Choose a according to e-greedy policy on Q(s, a|0) Take action a in s, observe s', r Store s, a, r, s' in Experience Replay Sample 4 sub-trajectories of m + k sequential transitions (s, a, r, s') from D s' is terminal, r + y max Q(s', a|0-) Otherwise foreach transition sampled do at = arg maxa(st, a|0) Otherwise end for l from 0 to k - 1 do R+=[$1 (II, ^+i) R{$+1)]/[1(II; ^+)] end Perform gradient descent on(R^-Q(s,a|o))2 Every 10000 steps 0- 0 S s until s' is terminal until training complete\nJromoto k 1 do R+=s=IIi=t+i /[k=l(IIi=t+i) t+s\nAlgorithm 1: Deep Recurrent Q-Networks with forward view eligibility traces on Atari. The eli. gibility traces are calculated using the n-step return function R(n) for time-step t was described in section2.1"}, {"section_index": "6", "section_name": "4 EXPERIMENTAL RESULTS", "section_text": "We describe experiments in two Atari games: Pong and Tennis. We chose Pong because it permits quick experimentation, and Tennis because it is one of the games that has proven difficult in all published results on Atari.\nFirst. we tested an RNN model both with X = 0 and X = 0.8, trained with RMSprop. Figure2|show. that the model without a trace ( = 0) learned at the same rate as DQN, while the model with trace. ( = 0.8) learned substantially faster and with more stability, without exhibiting any epochs witl depressed performance. This is probably due to the eligibility traces propagating rewards back by many steps in a single update. In Pong, when the agent hits the ball, it must wait several time-steps before the ball gets either to or past the opponent. Once this happens, the agent must assign the credit of the event back to the time when it hit the ball, and not to the actions performed after the ball had already left. The traces clearly help send this signal back faster.\nWhen using RMSprop, we used a momentum of O.95, an epsilon of O.01 and a learning rate of 0.00025. When using Adam, we used a momentum of gradients of 0.9, a momentum of squared gradients of 0.999, an epsilon of 0.001 and a learning rate of 0.00025..\nFigure 2: Test scores on Pong by training models with RMSprop vs Adam\nWe then tested the same models but using Adam as the optimizer instead of RMSprop. All models learn much faster with this setting. However, the model with no trace gains significantly more than the model with the trace. Our current intuition is that some hyper-parameters, such as the frozen network's update frequency, are limiting the rate at which the model can learn. 
Note also that the DQN model also learns faster with Adam as the optimizer, but remains quite unstable, in comparison with the recurrent net models.\nFinally, the results in Table[1|show that both using eligibility traces and Adam provide performance improvements. While training with RMSProp, the model with traces gets to near optimal perfor mance more than twice as quickly as the other models. With Adam, the model learns to be optimal in just 6 epochs.\nRMSprop Adam DON 23 12 RNN A = 0 28 8 RNN A = 0.8 10 6\nTable 1: Number of epochs before getting to 18 points in Pong. We chose 18 points as the thresh old because it represents a near-optimal strategy. Testing is performed with a 5% e-greedy policy stopping the agent from having a perfect score."}, {"section_index": "7", "section_name": "4.2 TENNIS", "section_text": "The second Atari 2600 game we tested was Tennis. A match consists of only one set, which is won by the player who is the first to win 6 \"games\" (as in regular tennis). The score ranges from 24 to 24, given as the difference between the number of balls won by the two players.\nAs in Pong, we first tried an RNN trained with RMSprop and the standard learning rate of O.00025 both with and without eligibility traces (using again X = 0.8 and X = 0). Figure3|shows that both RNN models learned to get optimal scores after about 50 epochs. This is in contrast with DQN, which never seems to be able to pass the 0 threshold, with large fluctuations ranging from -24 to 0. After visually inspecting the games played in the testing phase, we noticed that the DQN agent gets stuck in a loop, where it exchanges the ball with the opponent until the timer runs out. In such a case, the agent minimizes the number of points scored against, but never learns to beat the opponent. The score fluctuations depend on how few points the agent allows before entering the loop. We suspect that the agent gets stuck in this policy because the reward for trying to defeat the opponent is delayed, waiting for the ball to reach the opponent and get past it. Furthermore, the experiences of getting a point are relatively sparse. Together, it makes it difficult to propagate the reward back to the action of hitting the ball correctly.\nRMSprop Adam 20 20 15 15 10 1.0 RNN trace=0.0 SCores SCeres RNN trace=0.8 DQN 10 10 15 -15 20 20 ..... -25 25 0 5 10 15 20 25 30 0 5 10 15 20 25 30 epochs epochs\nWe also notice that both the RNN with and without eligibility traces manage to learn a near-optima policy without getting stuck in the bad policy. The RNN has the capacity of sending the signal bac to past states with BPTT, allowing it to do credit assignment implicitly, which might explain thei ability to escape the bad policy. Remarkably, this is the only algorithm that succeeds in gettin, near-optimal scores on Tennis, out of all variants of DQN (Mnih et al.(2015), Munos et al.(2016 Wang et al.(2015), Mnih et al.(2016), Schaul et al.(2015)), which tend to get stuck in the polic of delaying. The model without traces learned at a faster pace than the one with traces, arriving tc a score of 20 in 45 epochs as opposed to 62 for its counterpart. It's possible that the updates fo model with traces were smaller, due to the weighting of target values, indirectly leading to a lowe learning rate. We also trained the models with RMSprop and a higher learning rate of O.001. 
Thi led to the model with traces getting to 20 points in just 27 epochs, while the model without trace lost its ability to get optimal scores and never passed the O threshold.\nFigure 3: Test scores on Tennis comparing RMSprop and Adam\nTable 2: Number of epochs before getting to 20 points in Tennis. N/A represents the inability tc reach such a level.\nWe then tried using Adam as the optimizer, with the original learning rate of O.0o025. Both RNN models learned substantially faster than with RMSprop, with the RNN with traces getting to near-. optimal performance in just 13 epochs. With Adam, the gradient for the positive TD is stored in the momentum part of the equation for quite some time. Once in momentum, the gradient is part of many updates, which makes it enough to overtake the safe strategy. We also notice that the model. with traces was much more stable than its counterpart. The model without traces fell back to the. policy of delaying the game on two occasions, after having learned to beat the opponent. Finally. we trained DQN with Adam, but the model acted the same way as DQN trained with RMSprop.."}, {"section_index": "8", "section_name": "DISCUSSION AND CONCLUSION", "section_text": "In this paper, we analyzed the effects of using eligibility traces and different optimization functions We showed that eligibility traces can improve and stabilize learning and using Adam can strongly accelerate learning\nAs shown in the Pong results, the model using eligibility traces didn't gain much performance from using Adam. One possible cause is the frozen network. While it has a stabilizing effect in DQN. by stopping policies from drastically changing from a single update, it also stops newly learned values from being propagated back. Double DQN seems to partially go around this issue, allowing\nRMSprop lr=0.00025 RMSprop lr=0.001 Adam 30 30 30 20 20 20 L0 10 10 ScCress Cooess Scoress -10 -20 -30 -30 -30 0 10 20 30 40 50 60 70 80 0 5 10 15 20 25 30 0 5 10 15 20 25 30 epochs epochs epochs RNN trace=0.0 e RNN trace=0.8 A DQN\nthe policy of the next state to change, while keeping the values frozen. In future experiments we must consider eliminating or increasing the frozen network's update frequency. It would also be interesting to reduce the size of experience replay, as with increased learning speed, old observations can become too off-policy and barely be used in eligibility traces.\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8) 1735-1780, 1997.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprin arXiv:1412.6980, 2014.\nGuillaume Lample and Devendra Singh Chaplot. Playing fps games with deep reinforcement learn ing. arXiv preprint arXiv:1609.05521, 2016\nRemi Munos, Tom Stepleton, Anna Harutyunyan, and Marc G Bellemare. Safe and efficient off policy reinforcement learning. arXiv preprint arXiv:1606.02647, 2016.\nDavid Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche. Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering. the game of go with deep neural networks and tree search. Nature. 529(7587):484-489. 2016.\nRichard S Sutton and Andrew G Barto. Reinforcement learning: An introduction, volume 1. MI7 press Cambridge, 1998.\nSergey Levine and Pieter Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pp. 
1071-1079 2014.\nTijmen Tieleman and Geoffrey Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURsERA: Neural Networks for Machine Learning, 4(2) 2012."}] |
r1WUqIceg | [{"section_index": "0", "section_name": "IMPROVING STOCHASTIC GRADIENT DESCENT WITH FEEDBACK", "section_text": "Jayanth Koushik & Hiroaki Hayashi\nLanguage Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA\nIn this paper we propose a simple and efficient method for improving stochastic. gradient descent methods by using feedback from the objective function. The. method tracks the relative changes in the objective function with a running average. and uses it to adaptively tune the learning rate in stochastic gradient descent. We specifically apply this idea to modify Adam, a popular algorithm for training deep. neural networks. We conduct experiments to compare the resulting algorithm. which we call Eve, with state of the art methods used for training deep learning. models. We train CNNs for image classification, and RNNs for language modeling. and question answering. Our experiments show that Eve outperforms all other. algorithms on these benchmark tasks. We also analyze the behavior of the feedback mechanism during the training process.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Despite several breakthrough results in the last few years, the training of deep learning models remains a challenging problem. This training is a complex, high-dimensional, non-convex, stochasti optimization problem which is not amenable to many standard methods. Currently, the most commor approach is to use some variant of stochastic gradient descent. Many extensions have been proposec to the basic gradient descent algorithm - designed to handle specific issues in the training of deep learning models. We review some of these methods in the next section.\nAlthough variants of simple stochastic gradient descent work quite well in practice, there is still room. for improvement. This is easily evidenced by the existence of numerous methods to simplify the optimization problem itself like weight initialization techniques and normalization methods..\nIn this work, we seek to improve stochastic gradient descent with a simple method that incorporates feedback from the objective function. The relative changes in the objective function indicate progress of the optimization algorithm. Our main hypothesis is that incorporating information about this change into the optimization algorithm can lead to improved performance - quantified in terms of the progress rate. We keep a running average of the relative changes in the objective function and use it to divide the learning rate. When the average relative change is high, the learning rate is reduced This can improve the progress if, for example, the algorithm is bouncing around the walls of the objective function. Conversely, when the relative change is low, the learning rate is increased. This can help the algorithm accelerate through flat areas in the loss surface. As we discuss in the nex section, such \"plateaus\"' pose a significant challenge for first order methods and can create the illusion of local minima.\nWhile our method is general i.e. independent of any particular optimization algorithm, in this work we specifically apply the method to modify Adam (Kingma & Ba2014), considered to be the state of the art for training deep learning models. We call the resulting algorithm Eve and design experiments to compare it with Adam, as well as other popular methods from the literature.\nThe paper is organized as follows. 
In Section[2] we review recent results related to the optimization of deep neural networks. We also discuss some popular algorithms and their motivations. Our general"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "To tackle the saddle point problem,Dauphin et al.(2014) propose a second order method that fixes the issue with Newton's method. Their algorithm builds on considering the behavior of Newton's method near saddle points. Newton's method rescales gradients in each eigen-direction with the corresponding inverse eigenvalue. However, near a saddle point, negative eigenvalues can cause the method to move towards the saddle point. Based on this observation, the authors propose using the absolute values of the eigenvalues to rescale the gradients. This saddle-free Newton method is backed by theoretical justifications and empirical results; however due to the computational requirements second order methods are not very suitable for training large scale models. So we do not compare with such approaches in this work.\nWe instead focus on first order methods which only rely on the gradient information. A key issue in training deep learning models is that of sparse gradients. To handle this, Adagrad (Duchi et al.|2011 adaptively changes the learning rate for each parameter, performing larger updates for infrequently updated parameters. However its update rule causes the learning rate to monotonically decrease which eventually stalls the algorithm. Adadelta (Zeiler2012) and RMSProp (Tieleman & Hinton 2012) are two extensions that try to fix this issue. Finally, a closely related method, and the base for our algorithm Eve (introduced in the next section), is Adam (Kingma & Ba] 2014). Adam incorporates the advantages of both Adagrad and RMSProp - and it has been found to work quite well in practice. Adam uses a running average of the gradient to determine the direction of descent and scales the learning rate with a running average of the gradient squared. The authors of Adam also propose an extension based on the infinity norm, called Adamax. In our experiments, we compare Eve with both Adam and Adamax."}, {"section_index": "3", "section_name": "3.1 ASSUMPTION", "section_text": "We do need to make an assumption about the objective function f. We assume that the minimum. value of f over its domain is known. While this is true for loss functions encountered in machine learning (like mean squared error or cross entropy), it does not hold if the objective function also includes regularization terms (L1, L2 etc.). In all our experiments, we used dropout for regularization. which is not affected by this assumption. Finally, to simplify notation we assume that the minimum. has been subtracted from the objective function i.e. the minimum has been made O..\nmethod, and the specific algorithm Eve are discussed in Section[3] Then in Section4] we show that Eve consistently outperforms other methods in training convolutional neural networks (CNNs), and recurrent neural networks (RNNs). We also look in some detail, at the behavior of our method in the simple case of convex non-stochastic optimization. Finally we conclude in Section|5\nThere has been considerable effort to understand the challenges in deep learning optimization. ntuitively, it seems that the non-convex optimization is made difficult by the presence of several ooor local optima. 
However, this geometric intuition proves to be inadequate in reasoning about he high-dimensional case that arises with training deep learning models. Various empirical and. heoretical results (Bray & Dean2007| Dauphin et al.[2014) have indicated that the problem in high. dimensions arises not from local minima, but rather from saddle points. Moreover, a recent papei. Kawaguchi!2016) proved (for deep linear networks, and under reasonable assumptions, also for. leep non-linear networks) that all local minima in fact achieve the same value, and are optimal. The. work also showed that all critical points which are not global minima are saddle points. Saddle points. can seriously hamper the progress of both first and second order methods. Second order methods like Newton's method are actually attracted to saddle points and are not suitable for high dimensional non-convex optimization. First order methods can escape from saddle points by following directions f negative curvature. However, such saddle points are usually surrounded by regions of small curvature - plateaus. This makes first order methods very slow near saddle points and can create the llusion of a local minimum.\nmt1mt-1+11)g mt(1-t) mt Vt2Vt-1+(12)9t Ut(1-) Ut if t > 1 then if f(0t-1) > ft-2 then Otk+1 tK+1 else Ot. K+1 7t k+1 end if Ct < min max ft-1Ctft-2 |ft-1-ft-2| rt+ min{ft-1,ft-2} dt3dt-1+(13)r else ft-1f0t-1 dt1 end if 0t0t-1-QdtVUt+e mt while 1rn 0t\nk+1 end if Ct min max ft-1Ctft-2 [ft-1-ft-2| rt r min{ft-1,ft-2} dt3dt-1+13)r else ft-1f(0t-1) dt1 end if. 0t0t-1- nd while."}, {"section_index": "4", "section_name": "3.2 ALGORITHM", "section_text": "The main component of our proposed method is a feedback term that captures the relative change in the objective value. Let ft-2 and ft-1 denote the values of the objective function at time steps t - 2 ft - 1 ft -1 f t - 2 than 1 i.e. it captures both relative increase and decrease. We compute a running average using these relative changes to get a smoother estimate. Specifically, we take dj = 1, and for t > 1 define d as dt = Bdt-1 + (1 )rt. Here E 0, 1) is a decay rate - large values correspond to a slowly changing dt, and vice versa. This simple expression can, however, blow up and lead to instability. To handle this issue, we use a thresholding scheme. A simple thing to do would be to clip dt as min{max{k, dt}, K} for some suitable 0 < k < K. But we found this to not work very well in practice due to the abrupt nature of the clipping. Instead we indirectly clip dt by smoothly tracking\nAlgorithm 1 Eve: Adam with feedback. Parameters carried over from Adam have the same default values: = 10-3, 1 = 0.9, 2 = 0.999, e = 10-8. For parameters specific to our method, we recommend default values 3 = 0.999, k = 0.1, K = 10. Wherever applicable, products are. elementwise.\nFigure 1: Training loss for convolutional neural networks on CIFAR10 and CIFAR100. The vertical axis is shown in log scale. In both cases, our proposed method achieves the best performance.\nft-1 > ft-2. This smooth tracking has the additional advantage of making dt less susceptible to th high variability that comes with training using minibatches.\nOnce dt has been computed, it can be used to modify any gradient descent algorithm by modifying the learning rate a as at = a/dt. Large values of dt, caused by large changes in the objective function will lead to a smaller effective learning rate. Similarly, small values of d will lead to larger effective learning rate. 
Since we start with do = 1, the initial updates will closely follow tha of the base algorithm. In the next section, we will look at how dt evolves during the course of ar experiment to get a better understanding of how it affects the training.\nWe note again that our method is independent of any particular gradient descent algorithm. However for this current work, we specifically focus on applying the method to Adam (Kingma & Ba] 2014 This modified algorithm, which we call Eve, is shown in Algorithm[1] We modify the final Adan update by multiplying the denominator /vt with dt. In addition to the hyperparameters in Adam, we introduce 3 new hyperparameters 3, k, and K. In all our experiments we use the values 3 = 0.999 k = 0.1. and K = 10, which we found to work well in practice."}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "Now we evaluate our proposed method by comparing Eve with several state of the art algorithms for optimizing deep learning models|' In all experiments, we used ReLU activations, and initialized weights according to the scheme proposed by Glorot & Bengio(2010). We used minibatches of size. 128, and linear decay for the learning rate: Qt = a/(1 + yt) (y is the decay rate, picked by searching. over a range of values).\nIn the figures, SGD refers to vanilla stochastic gradient descent, and SGD Nesterov refers to stochastic gradient descent with Nesterov momentum (Nesterov1983) where we set the momentum to 0.9 in all experiments.\nWe first trained a 5 layer convolutional neural network for 10-way classification of images from th CIFAR10 dataset (Krizhevsky & Hinton 2009). The model consisted of 2 blocks of 3x3 convolutiona layers each followed by 2x2 max pooling and 0.25 dropout (Srivastava et al.]2014). The first blocl\n'Full technical details of the experiments and additional results are available at ht tps : / / git hub . con jayanthkoushik/sgd-feedback\n1.5 100 4.3 100 Eve Eve Adam Adam Adamax Adamax 5.0 x 10-1 SGD Nesterov 2.6 100 SGD Nesterov RMSprop SGD 1.6 10-1 Adagrad SSO 1.6 100 Adadelta 5.4 10-2 9.8 10-1 1.8 102 6.0 10-1 0 100 200 300 400 500 0 20 40 60 80 100 Epoch Epoch (a) CIFAR10 (b) CIFAR100\nFigure 2: Behavior of the tuning coefficient dt during the experiment shown in Figure|1a There is an overall trend of acceleration followed by decay, but also more fine-grained behavior as indicated by the bottom-right plot.\n101 100 10-1 SSO 10-2 10-3 10-4 0 50000 100000 150000 200000 Iteration\nFigure 3: Minibatch losses for Eve during the CNN experiment on CIFAR10 (Figure|1a). T variance in the losses increases throughout the training.\ncontained 2 layers with 32 filters each, and the second block contained 2 layers with 64 filters each The convolutional layers were followed by a fully connected layer with 512 units and a 10-way softmax layer. We trained this model for 500 epochs on the training split using various populai methods for training deep learning models, as well as Eve. For each algorithm, we tried learning rates {10-2, 10-3, 10-4} (for algorithms with suggested default learning rates, we also included them in the search), learning rate decays {0, 10-2, 10-3, 10-4}, and picked the pair of values that led to the smallest final training loss. The loss curves are shown in Figure[1a] Eve quickly surpasses all othe methods and achieves the lowest final training loss. 
In the next section we will look at the behavior of the adaptive coefficient dt to gain some intuition behind this improved performance\nWe also trained a larger CNN model using the top-performing algorithms from the previous exper. iment. This model consisted of 3 blocks of 3x3 convolutional layers (3 layers per block, and 64 128, 256 filters per layer in the first, second, and third block respectively) each followed by 2x2 max. pooling and 0.5 dropout. Then we had 2 fully connected layers with 512 units, each followed by 0.5 dropout, and finally a 100-way softmax. We trained this model on the CIFAR100 (Krizhevsky &. Hinton2009) dataset for 100 epochs. We again performed a grid search over the same learning rate. and decay values as the last experiment. The results are shown in Figure[1b] and once again show that our proposed method improves over state of the art methods for training convolutional neural. networks.\n3 1.0 0.5 2 0.0 0 10 20 30 40 50 2.5 2.4 2.3 2.2 2.1 2.0 0 100 200 300 400 500 350 352 354 356 358 360 Epoch Epoch\n0.5 2 0.0 0 10 20 30 40 50 2.5 2.4 2.3 2.2 2.1 0 2.0 0 100 200 300 400 500 350 352 354 356 358 360 Epoch Epoch\n101 100 10-1 SSO 10-2 10-3 10-4 0 50000 100000 150000 200000 Iteration\nFigure 4: Loss curves and tuning coefficient dt for batch gradient descent training of a logistic regression model. For Eve, dt continuously decreases and converges to the lower threshold O.1..\nFigure 5: The left plot shows Adam with different learning rates plotted with Eve (learning rate 10-2) Although Adam with a larger learning rate can be almost identical to Eve, this is largely dependent on the initial values for Adam as shown in the plot on the right."}, {"section_index": "6", "section_name": "4.2 ANALYSIS OF TUNING COEFFICIENT", "section_text": "Before we consider the next set of experiments on recurrent neural networks, we will first look more closely at the behavior of the tuning coefficient dt in our algorithm. We will specifically consider the results from the CNN experiment on CIFAR10. Figure|2|shows the progress of dt throughout the training, and also in two smaller windows. A few things are worth noting here. First is that of the overall trend. There is an initial acceleration followed by a decay. This initial acceleration allows Eve to rapidly overtake other methods, and makes it proceed at a faster pace for about 100 epochs. This acceleration is not equivalent to simply starting with a larger learning rate - in all our experiments we search over a range of learning rate values. The overall trend for dt can be explained by looking a the minibatch losses at each iteration (as opposed to the loss computed over the entire dataset after each epoch) in Figure3] Initially, different minibatches achieve similar loss values which leads to dj decreasing. But as training proceeds, the variance in the minibatch losses increases and dt eventually increases. However, this overall trend does not capture the complete picture - for example, as showr in the bottom right plot of Figure[2] d can actually be decreasing in some regions of the training adjusting to local structures in the error surface.\nTo further study the observed acceleration, and to also motivate the need for clipping, we consider. a simpler experiment. We trained a logistic regression model on 1000 images from the MNIST. dataset. We used batch gradient descent for training i.e. all 1o00 samples were used for computing. the gradient at each step. 
We trained this model using Eve, Adam, Adamax, and SGD Nesterov\n2.2 100 Eve Adam Adamax 3.3 10-2 SGD Nesterov 1.0 SS 5.1 10-4 0.5 0.0 7.7 10-6 1.2 10-7 0 2000 4000 6000 8000 10000 Epoch\n10-2 101 Adam ( 6 10-2) Eve 100 Adam ( 7 10-2) Adam 10-3 Adam ( 8 10-2) 10-1 Adam ( 9 10-2) 10-4 10-2 Adam ( 10 10-2) 10-3 SSO 10-5 Eve 10-4 Adam ( 1 10 10-6 10-5 Adam ( 2 10-2) Adam ( 3 10-2) 10-6 10-7 Adam ( 4 10-2) 10-7 Adam ( 5 10-2) 10-8 10-8 0 5000 10000 15000 20000 0 5000 10000 15000 20000 Epoch Epoch\nFigure 6: Loss curves for experiments with recurrent neural networks. Eve consistently achieves better performance than other methods.\nfor 1oooo iterations, searching over a large range of values for the learning rate and decay: a E. {10-1, 10-2, 10-3, 10-4, 10-}, E {10-2, 10-3, 10-4, 10-5, 10-6, 0}. The results are shown in Figure4| Eve again outperforms all other methods and achieves the lowest training loss. Also, since. this is a smooth non-stochastic problem, the tuning coefficient d continuously decreases - this makes having a thresholding mechanism important since the learning rate would blow up otherwise..\nAlthough in the previous experiment the effect of our method is to increase the learning rate, it is not equivalent to simply starting with a larger learning rate. We will establish this with a couple simple experiments. First we note that in the previous experiment, the optimal decay rates for both Adam and Eve were 0 - no decay. The optimal learning rate for Eve was 10-2. Since the tuning coefficient converges to 0.1, we trained Adam using no decay, and learning rates i 10-2 where i varies from 1 to 10. The training loss curves are shown in the left plot of Figure[5] while increasing the learning rate with Adam does seem to close the gap with Eve, Eve does remain marginally faster. Moreover, and more importantly, this first plot represents the best-case situation for Adam. With larger learning rates, training becomes increasingly unstable and sensitive to the initial values of the parameters This is illustrated in the right plot of Figure|5|where we used Eve (with learning rate 10-2) and Adam (with learning rate 10-1) 10 times with different random initializations. In some cases, Adam fails to converge whereas Eve always converges - even though Eve eventually reaches a learning rate of 10-1. This is because very early in the training, the model is quite sensitive at higher learning rates due to larger gradients. Depending on the initial values, the algorithm may or may not converge. So it is advantageous to slowly accelerate as the learning stabilizes rather than start with a larger learning rate."}, {"section_index": "7", "section_name": "4.3 RECURRENT NEURAL NETWORKS", "section_text": "Finally, we evaluated our method on recurrent neural networks (RNNs). We first trained a RNN fo character-level language modeling on the Penn Treebank dataset (Marcus et al.||1993). Specifically the model consisted of a 2-layer character-level Gated Recurrent Unit (Chung et al. 2014) with hidden layers of size 256, 0.5 dropout between layers, and sequences of 100 characters. We adoptec 10-3 as the initial learning rate for Adam, Eve, and RMSProp. For Adamax, we used 2 10-3 as the learning rate since it is the suggested value. We used 3 10-4 for the learning rate decay. 
We\n1.8 100 2.5 100 Eve Eve Adam Adam Adamax Adamax 2.3 x 10-1 RMSprop 6.0 10-1 RMSprop 2.9 x 10-2 1.5 10 3.8 10-3 3.6 102 4.9 10 8.6 10 0 20 40 60 80 100 0 20 40 60 80 100 Epoch Epoch (a) bAbI, Q14 (b) bAbI, Q19 3.2 Eve 3.0 Adam RMSprop 2.8 Adamax 2.6 SSO 2.4 2.2 2.0 1.8 0 20 40 60 80 100 Epoch (c) Penn Treebank, Language modeling\ntrained this model for 100 epochs using each of the algorithms. The results, plotted in Figure 6c clearly show that our method achieves the best results. Eve optimizes the model to a lower final loss than the other models.\nWe trained another RNN-based model for the question & answering task. Specifically, we chose. two question types among 20 types from the bAbI dataset (Weston et al.|2015), Q19 and Q14. The dataset consists of pairs of supporting story sentences and a question. Different types of pairs are said. to require different reasoning schemes. For our test case, Q19 and Q14 correspond to Path Finding. and Time Reasoning respectively. We picked Q19 since it is reported to have the lowest baseline. score, and we picked Q14 randomly from the remaining questions. The model consisted of two parts.. one for encoding story sentences and another for query. Both included an embedding layer with 256. hidden units, and O.3 dropout. Next query word embeddings were fed into a GRU one token at a. time, to compute a sentence representation. Both story and query sequences were truncated to the maximum sequence length of 100. Finally, the sequence of word embeddings from story sentences. and the repeated encoded representation of a query were combined together to serve as input for. each time step in another GRU, with O.3 dropout. We searched for the learning rate and decay from. a range of values, E {10-2, 10-3, 10-4} and y E {10-2, 10-3, 10-4, 0}. The results, shown in Figures6a and6b|show that Eve again improves over all other methods."}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "For future work, we would look to theoretically analyze our method and its effects. While we have tried to evaluate our algorithm Eve on a variety of tasks, additional experiments on larger scale problems would further highlight the strength of our approach. We are making code for our method. and the experiments publicly available to encourage more research on this method.."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Alan J Bray and David S Dean. Statistics of critical points of gaussian fields on large-dimensional spaces. Physical review letters, 98(15):150201, 2007.\nJunyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of. gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. 2014\nYann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in neural information processing systems, pp. 2933-2941, 2014.\nJohn Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and. stochastic optimization. Journal of Machine Learning Research. 12(Jul):2121-2159. 2011.\nDiederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprini arXiv:1412.6980, 2014\nAlex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009\nMitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: The penn treebank. 
"}, {"section_index": "8", "section_name": "5 CONCLUSION", "section_text": "We proposed a simple and efficient method for incorporating feedback into stochastic gradient descent algorithms. We used this method to create Eve, a modified version of the Adam algorithm. Experiments with a variety of models showed that the proposed method can help improve the optimization of deep learning models.

For future work, we would look to theoretically analyze our method and its effects. While we have tried to evaluate our algorithm Eve on a variety of tasks, additional experiments on larger scale problems would further highlight the strength of our approach. We are making code for our method and the experiments publicly available to encourage more research on this method."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Alan J Bray and David S Dean. Statistics of critical points of gaussian fields on large-dimensional spaces. Physical Review Letters, 98(15):150201, 2007.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249-256, 2010.

Kenji Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems (NIPS), 2016. To appear.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Merrienboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

Matthew D Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012."}]
SkuqA_cgx | [{"section_index": "0", "section_name": "AUTOMATED GENERATION OF MULTILINGUAL CLUSTERS FOR THE EVALUATION OF DISTRIBUTED REPRESENTATIONS", "section_text": "Philip Blair, Yuval Merhav & Joel Barry

We propose a language-agnostic way of automatically generating sets of semantically similar clusters of entities along with sets of "outlier" elements, which may then be used to perform an intrinsic evaluation of word embeddings in the outlier detection task. We used our methodology to create a gold-standard dataset, which we call WikiSem500, and evaluated multiple state-of-the-art embeddings. The results show a correlation between performance on this dataset and performance on sentiment analysis."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "High quality datasets for evaluating word and phrase representations are essential for building better models that can advance natural language understanding. Various researchers have developed and shared datasets for syntactic and semantic intrinsic evaluation. The majority of these datasets are based on word similarity (e.g., Finkelstein et al. (2001); Bruni et al. (2012); Hill et al. (2016)) and analogy tasks (e.g., Mikolov et al. (2013a;b)). While there has been a significant amount of work in this area which has resulted in a large number of publicly available datasets, many researchers have recently identified problems with existing datasets and called for further research on better evaluation methods (Faruqui et al. 2016; Gladkova et al. 2016; Hill et al. 2016; Avraham & Goldberg 2016; Linzen 2016; Batchkarov et al. 2016). A significant problem with word similarity tasks is that human bias and subjectivity result in low inter-annotator agreement and, consequently, human performance that is lower than automatic methods (Hill et al. 2016). Another issue is low or no correlation between intrinsic and extrinsic evaluation metrics (Chiu et al. 2016; Schnabel et al. 2015).

Recently, Camacho-Collados & Navigli (2016) proposed the outlier detection task as an intrinsic evaluation method that improved upon some of the shortcomings of word similarity tasks. The task builds upon the "word intrusion" task initially described in Chang et al. (2009): given a set of words, the goal is to identify the word that does not belong in the set. However, like the vast majority of existing datasets, this dataset requires manual annotations that suffer from human subjectivity and bias, and it is not multilingual."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Inspired by Camacho-Collados & Navigli (2016), we have created a new outlier detection dataset that can be used for intrinsic evaluation of semantic models. The main advantage of our approach is that it is fully automated using Wikidata and Wikipedia, and it is also diverse in the number of included topics, words and phrases, and languages. At a high level, our approach is simple: we view Wikidata as a graph, where nodes are entities (e.g., (Chicago Bulls, Q128109), (basketball team, Q13393265)), edges represent "instance of" and "subclass of" relations (e.g., (Chicago Bulls, Q128109) is an instance of (basketball team, Q13393265), and (basketball team, Q13393265) is a subclass of (sports team, Q12973014)), and the semantic similarity between two entities is inversely proportional to their graph distance (e.g., (Chicago Bulls, Q128109) and (Los Angeles Lakers, Q121783) are semantically similar since they are both instances of (basketball team, Q13393265)). This way we can form semantic clusters.
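The graph view described above is easy to make concrete; the following is an illustrative sketch (not the authors' code) using the QIDs from the example, with similarity taken to be inversely proportional to graph distance:

```python
# Sketch of the Wikidata graph view above: entities as nodes, "instance of" and
# "subclass of" relations as edges, similarity inversely proportional to distance.
import networkx as nx

g = nx.Graph()
g.add_edge("Q128109", "Q13393265")    # Chicago Bulls --instance of--> basketball team
g.add_edge("Q121783", "Q13393265")    # Los Angeles Lakers --instance of--> basketball team
g.add_edge("Q13393265", "Q12973014")  # basketball team --subclass of--> sports team

def similarity(a, b):
    # One simple inverse-distance choice; the exact functional form is an assumption.
    return 1.0 / (1 + nx.shortest_path_length(g, a, b))

# Two instances of the same class are at distance 2, so they form a tight cluster.
print(similarity("Q128109", "Q121783"))
```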
We release the first version of our dataset, which we call WikiSem500, to the research community. It contains around 500 per-language cluster groups for English, Spanish, German, Chinese, and Japanese (a total of 13,314 test cases). While we have not yet studied the correlation between performance on this dataset and various downstream tasks, our results show correlation with sentiment analysis. We hope that this diverse and multilingual dataset will help researchers to advance the state-of-the-art of word and phrase representations."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Word similarity tasks have been popular for evaluating distributional similarity models. The basic idea is having annotators assign similarity scores to word pairs. Models that can automatically assign similarity scores to the same word pairs are evaluated by computing the correlation between their scores and the human-assigned scores. Schnabel et al. (2015) and Hill et al. (2016) review many of these datasets. Hill et al. (2016) also argue that the predominant gold standards for semantic evaluation in NLP do not measure the ability of models to reflect similarity. Their main argument is that many such benchmarks measure association and relatedness and not necessarily similarity, which limits their suitability for a wide range of applications. One of their motivating examples is the word pair "coffee" and "cup," which have high similarity ratings in some benchmarks despite not being very similar. Consequently, they developed guidelines that distinguish between association and similarity and used five hundred Amazon Mechanical Turk annotators to create a new dataset called SimLex-999, which has higher inter-annotator agreement than previous datasets. Avraham & Goldberg (2016) improved this line of work further by redesigning the annotation task from rating scales to ranking, in order to alleviate bias, and also redefined the evaluation measure to penalize models more for making wrong predictions on reliable rankings than on unreliable ones.

Another popular task is based on word analogies. The analogy dataset proposed by Mikolov et al. (2013a) has become a standard evaluation set. The dataset contains fourteen categories, but only about half of them are for semantic evaluation (e.g. "US Cities", "Common Capitals", "All Capitals"). In contrast, WikiSem500 contains hundreds of categories, making it a far more diverse and challenging dataset for the general-purpose evaluation of word representations. The Mikolov dataset
has the advantage of additionally including syntactic categories, which we have left for future work.

Camacho-Collados & Navigli (2016) addressed some of the issues mentioned previously by proposing the outlier detection task. Given a set of words, the goal is to identify the word that does not belong in the set. Their pilot dataset consists of eight different topics, each made up of a cluster of eight words and eight possible outliers. Four annotators were used for the creation of the dataset. The main advantage of this dataset is its near-perfect human performance. However, we believe a major reason for that is the specific choice of clusters and the small size of the dataset.

In a similar format to the one used in the dataset furnished by Camacho-Collados & Navigli (2016), we generated sets of entities which were semantically similar to one another, known as a "cluster," followed by up to three pairs (as available) of dissimilar entities, or "outliers," each with different levels of semantic similarity to the cluster. The core thesis behind our design is that our knowledge base, Wikidata (2016), can be treated like a graph, where the semantic similarity between two elements is inversely proportional to their graph distance.

Informally, we treat Wikidata entities which are instances of a common entity as a cluster (see Figure 1). Then, starting from that common entity (which we call a 'class'), we follow "subclass of" relationships to find a sibling class (see "American Football Team" in Figure 1). Two items which are instances of the sibling class (but not instances of the original class) are chosen as outliers. The process is then repeated with a 'cousin' class with a common grandparent to the original class (see "Ice Hockey Team" in Figure 1). Finally, we choose two additional outliers by randomly selecting items which are a distance of at least 7 steps away from the original class. These three "outlier classes" are referred to as O1, O2, and O3 outlier classes, respectively.

Figure 1: Partial example of a Wikidata cluster. Solid arrows represent "Instance Of" relationships, and dashed arrows represent "Subclass Of" relationships. (The depicted hierarchy includes classes such as "Sports Club", "Football Team", "Association Football Club", "American Football Team", and "Ice Hockey Club", with instances including Manchester United FC, FC Barcelona, Coritiba F.C., New England Patriots, and the Anaheim Ducks.)

A full formalization of our approach is described in Appendix A.

Prior to developing a framework to improve the quality of the generated dataset, we performed a small amount of manual pruning of our Wikidata graph. Disambiguation pages led to bizarre clusters of entities, for their associated relationships are not true semantic connections, but are instead artifacts of the structure of our knowledge base. As such, they were removed. Additionally, classes within a distance of three from the entity for "Entity" itself¹ (Q35120) had instances which had quite weak semantic similarity (one example being "human"). We decided that entities at this depth range ought to be removed from the Wikidata graph as well.

¹Q35120 is effectively the "root" node of the Wikidata graph; 95.5% of nodes have "subclass of" chains which terminate at this node.

Once our Wikidata dump was pruned, we employed a few extra steps at generation time to further improve the quality of the dataset; first and foremost was how we chose representative instances and outliers for each class (see σi and σo in Appendix A). While "San Antonio Spurs" and "Chicago Bulls" may both be instances of "basketball team," so are "BC Andorra" and "Olimpia Milano." We wanted the cluster entities to be as strongly related as possible, so we sought a class-agnostic heuristic to accomplish this. Ultimately, we found that favoring entities whose associated Wikipedia pages had higher sitelink counts gave us the desired effect.

As such, we created clusters by choosing the top eight instances of a given class, ranked by sitelink count. Additionally, we only chose items as outliers when they had at least ten sitelinks, so as to remove those which were 'overly obscure,' for the difficulty word embeddings have identifying rare words (Schnabel et al. (2015)) would artificially decrease the difficulty of such outliers.

We then noticed that many cluster entities had similarities in their labels that could be removed if a different label was chosen. For example, 80% of the entities chosen for "association football club" ended with the phrase "F.C." This essentially invalidates the cluster, for the high degree of syntactic overlap artificially increases the cosine similarity of all cluster items in word-level embeddings.
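A minimal sketch of the sitelink-based selection heuristics above; `instances_of` and `sitelinks` are assumed precomputed mappings over QIDs, not part of any released code:

```python
# Sketch of the selection heuristics: rank a class's instances by Wikipedia
# sitelink count, keep the top eight as the cluster, and require at least ten
# sitelinks for outlier candidates.
def build_cluster(class_qid, instances_of, sitelinks, size=8):
    ranked = sorted(instances_of[class_qid],
                    key=lambda q: sitelinks.get(q, 0), reverse=True)
    return ranked[:size]

def eligible_outliers(candidates, sitelinks, min_links=10):
    # Filter out 'overly obscure' entities, which would skew outlier difficulty.
    return [q for q in candidates if sitelinks.get(q, 0) >= min_links]
```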
In order to increase the quality of the surface forms chosen for each entity, we modified our resolution of entity QIDs to surface forms (see τ in Appendix A) to incorporate a variant² of the work from
"}, {"section_index": "4", "section_name": "Spitkovsky & Chang (2012)", "section_text": "
    τ(QID) = argmax_s P(s | wikipedia_page(QID))

That is, the string for an entity is the string which is most likely to link to the Wikipedia page associated with that entity. For example, half of the inlinks to the page for Manchester United FC are the string "Manchester United," which is the colloquial way of referring to the team.

²By 'variant' we are referring to the fact that the dictionaries in which we perform the probability lookups are constructed for each language, as opposed to the cross-lingual dictionaries originally described by Spitkovsky & Chang (2012).
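The resolution rule above translates directly into a lookup over anchor-text statistics; the following is an illustrative sketch in which `inlink_anchor_counts` (anchor string to inlink count, per page, per language) is an assumed precomputed dictionary:

```python
# Sketch of surface-form resolution: pick the anchor string most likely to link
# to the entity's Wikipedia page, i.e. argmax_s P(s | wikipedia_page(QID)).
from collections import Counter

def resolve_surface_form(qid, wikipedia_page, inlink_anchor_counts):
    anchors = Counter(inlink_anchor_counts[wikipedia_page(qid)])
    total = sum(anchors.values())
    best, count = anchors.most_common(1)[0]
    return best, count / total  # e.g. ("Manchester United", 0.5)
```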
Next, we filter out remaining clusters using a small set of heuristics. The following clusters are rejected:

- Clusters in which more than two items are identical after having all digits removed. This handles cases such as entities differing only by years (e.g. "January 2010," "January 2012," etc.).
- Clusters in which more than three elements have identical first or last six characters.³ Characters are compared instead of words in order to better support inflected languages. This was inspired by clusters for classes such as "counties of Texas" (Q11774097), where even the dictionary-resolved aliases have high degrees of syntactic overlap (namely, over half of the cluster items ended with the word "County").
- Clusters in which any item has an occurrence of a 'stop affix,' such as the prefix "Category:" or a suffix equivalent to "List of" on Japanese Wikipedia. In truth, this could be done during preprocessing, but doing it at cluster generation time instead has no bearing on the final results. These were originally all included under an additional stop class ("Wikimedia page outside the main knowledge tree") at prune time, but miscategorizations in the Wikidata hierarchy prevented us from doing so; for example, a now-removed link resulted in every country being pruned from the dataset. As such, we opted to take a more conservative approach and perform this at cluster-generation time, fine-tuning our stoplist as needed.
- Clusters with more than one entity with a string length of one. This prevents clusters such as "letters of the alphabet" from being created. Note that this heuristic was disabled for the creation of Chinese and Japanese clusters.
- Clusters with too few entities, after duplicates introduced by resolving entities to surface forms (τ) are removed.

³For Chinese and Japanese, this is modified such that at least six entities must have identical (non-kana) first or last characters, or more than three must have the same first or last two characters. Because English is not inflected, we simply use spaces as approximate word boundaries and check that the first or last of those does not occur too often.

"}, {"section_index": "5", "section_name": "3.2 THE WIKISEM500 DATASET", "section_text": "Using the above heuristics and preprocessing, we have generated a dataset, which we call WikiSem500. Our dataset is formatted as a series of files containing test groups, comprised of a cluster and a series of outliers. Test cases can be constructed by taking each outlier in a given group with that group's cluster. Table 1 shows the number of included test groups and test cases for each language. Each group contains a cluster of 7-8 entities and up to two entities from each of the three outlier classes. Table 2 shows example clusters taken from the dataset.

Table 1: Statistics of the WikiSem500 dataset.

Language    Test Groups    Test Cases
English     500            2,816
Spanish     500            2,777
German      500            2,780
Japanese    448            2,492
Chinese     441            2,449

Table 2: Example partial clusters from the WikiSem500 dataset. Classes, clusters, and outliers are shown.

                 fictional country   mobile operating system   video game publisher   emotion
Cluster Items    Mordor              Windows Phone             Activision             fear
                 Rohan               Firefox OS                Nintendo               love
                 Shire               iOS                       Valve Corporation      happiness
                 Arnor               Android                   Electronic Arts        anger
Outliers         Thule               Periscope                 HarperCollins          magnitude
                 Duat                Ingress                   Random House           Gini coefficient
                 Donkey Kong         iWeb                      Death Row Records      redistribution of wealth
                 Scrooge McDuck      iPhoto                    Sun Records            Summa Theologica

For clarity, we first restate the definitions of the scoring metrics defined by Camacho-Collados & Navigli (2016) in terms of test groups (in contrast to the original definition, which is defined in terms of test cases). The way in which out-of-vocabulary entities are handled and scores are reported makes this distinction important, as will be seen in Section 4.3.

The core measure during evaluation is known as the compactness score; given a set W of words, it is defined as follows:

    ∀w ∈ W:  c(w) = (1 / ((|W| - 1)(|W| - 2))) Σ_{wi ∈ W\{w}} Σ_{wj ∈ W\{w}, wj ≠ wi} sim(wi, wj)    (2)

where sim is a vector similarity measure (typically cosine similarity). Note that Camacho-Collados & Navigli (2016) reduces the asymptotic complexity of c(w) from O(n³) to O(n²). We denote P(W, w) to be the (zero-indexed) position of w in the list of elements of W, sorted by compactness score in descending order. From this, we can describe the following definition for Outlier Position (OP), where (C, O) is a test group and o ∈ O:

    OP(C ∪ {o}) = P(C ∪ {o}, o)

Outlier Detection (OD) then indicates whether the outlier was ranked last:

    OD(C ∪ {o}) = 1 if OP(C ∪ {o}) = |C|, and 0 otherwise

Aggregating over a dataset D of test groups gives the Outlier Position Percentage (OPP) and accuracy scores:

    OPP(D) = ( Σ_{(C,O) ∈ D} Σ_{o ∈ O} OP(C ∪ {o}) / |C| ) / ( Σ_{(C,O) ∈ D} |O| )

    Accuracy(D) = ( Σ_{(C,O) ∈ D} Σ_{o ∈ O} OD(C ∪ {o}) ) / ( Σ_{(C,O) ∈ D} |O| )

One thing Camacho-Collados & Navigli (2016) does not address is how out-of-vocabulary (OOV) items should be handled. Because our dataset is much larger and contains a wider variety of words, we have extended their work to include additional scoring provisions which better encapsulate the performance of vector sets trained on different corpora.
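The metrics above translate directly into code; the following is an illustrative implementation (not the authors' evaluation script), using the naive O(n³) form of the compactness score:

```python
# Illustrative implementation of compactness, Outlier Position (OP), Outlier
# Detection (OD), and the aggregate OPP/accuracy scores defined above.
import numpy as np

def compactness(W, sim):
    """c(w) for each w in W (Equation 2); W is a list of vectors. Naive O(n^3);
    Camacho-Collados & Navigli (2016) give an O(n^2) formulation."""
    n, scores = len(W), []
    for k in range(n):
        rest = [W[i] for i in range(n) if i != k]
        total = sum(sim(rest[i], rest[j])
                    for i in range(len(rest)) for j in range(len(rest)) if i != j)
        scores.append(total / ((n - 1) * (n - 2)))
    return scores

def outlier_position(cluster, outlier, sim):
    """Zero-indexed rank of the outlier when C u {o} is sorted by c(w) descending."""
    items = cluster + [outlier]
    scores = compactness(items, sim)
    order = sorted(range(len(items)), key=lambda i: scores[i], reverse=True)
    return order.index(len(items) - 1)

def evaluate(dataset, sim):
    """dataset: list of (cluster, outliers) test groups; returns (OPP, accuracy)."""
    op_sum = od_sum = n_cases = 0
    for cluster, outliers in dataset:
        for o in outliers:
            op = outlier_position(cluster, o, sim)
            op_sum += op / len(cluster)
            od_sum += int(op == len(cluster))  # OD = 1 iff the outlier is ranked last
            n_cases += 1
    return op_sum / n_cases, od_sum / n_cases

cosine = lambda u, v: float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```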
There are two approaches to handling out-of-vocabulary entities: use a sentinel vector to represent all such entities, or discard such entities entirely. The first approach is simpler, but it has a number of drawbacks; for one, a poor choice of sentinel can have a drastic impact on results. For example, an implementation which uses the zero vector as a sentinel and defines sim(x, 0) = 0 for all x places many non-out-of-vocabulary outliers at a large disadvantage in a number of vector spaces, for we have found that negative compactness scores are rare. The second approach avoids deliberately introducing invalid data into the testing evaluation, but comparing scores across vector embeddings with different vocabularies is difficult due to them having different in-vocabulary subsets of the test set.

We have opted for the latter approach, computing the results on both the entire dataset and on only the intersection of in-vocabulary entities between all evaluated vector embeddings. This allows us to compare embedding performance both when faced with the same unknown data and when evaluated on the same, in-vocabulary data.

"}, {"section_index": "6", "section_name": "4.2 HUMAN BASELINE", "section_text": "In order to gauge how well embeddings should perform on our dataset, we conducted a human evaluation. We asked participants to select the outlier from a given test case, providing us with a human baseline for the accuracy score on the dataset. We computed the non-out-of-vocabulary intersection of the embeddings shown in Table 4, from which 60 test groups were sampled. Due to the wide array of domain knowledge needed to perform well on the dataset, participants were allowed to refer to Wikipedia (but explicitly told not to use Wikidata). We collected 447 responses, with an overall precision of 68.9%.

The performance found is not as high as on the baseline described in Camacho-Collados & Navigli (2016), so we conducted a second human evaluation on a smaller hand-picked set of clusters in order to determine whether a lack of domain knowledge or a systemic issue with our method was to blame. We had 6 annotators fully annotate 15 clusters generated with our system. Each cluster had one outlier, with a third of the clusters having each of the three outlier classes. Human performance was at 93%, with each annotator missing exactly one cluster. Five out of the six annotators missed the same cluster, which was based on books and contained an O1 outlier (the most difficult class). We interviewed the annotators, and three of them cited a lack of clarity on Wikipedia over whether or not the presented outlier was a book (leading them to guess), while the other two cited a conflation with one of the book titles and a recently popular Broadway production.

With the exception of this cluster, the performance was near-perfect, with one annotator missing one cluster. Consequently, we believe that the lower human performance on our dataset is primarily a result of the dataset's broad domain.

"}, {"section_index": "7", "section_name": "4.3 EMBEDDING RESULTS", "section_text": "We evaluated our dataset on a number of publicly available vector embeddings: the Google News-trained CBOW model released by Mikolov et al. (2013a), the 840-billion token Common Crawl corpus-trained GloVe model released by Pennington et al. (2014), and the English, Spanish, German, Japanese, and Chinese MultiCCA vectors⁵ from Ammar et al. (2016), which are trained on a combination of the Europarl (Koehn (2005)) and Leipzig (Quasthoff et al. (2006)) corpora. In addition, we evaluated GloVe, CBOW, and skip-gram vectors trained on a combination of Wikipedia and Gigaword (Parker et al. (2011)); see Tables 3 and 4.

⁵The vectors are word2vec CBOW vectors, and the non-English vectors are aligned to the English vector space. Reproducing the original (unaligned) non-English vectors yields near-identical results to the aligned vectors.

The bulk of the embeddings we evaluated were word embeddings (as opposed to phrase embeddings), so we needed to combine each embedding's vectors in order to represent multi-word entities. If the embedding does handle phrases (only Google News), we perform a greedy lookup for the longest matching subphrase in the embedding, averaging the subphrase vectors; otherwise, we take a simple average of the vectors for each token in the phrase. If a token is out-of-vocabulary, it is ignored. If all tokens are out-of-vocabulary, the entity is discarded. This check happens as a preprocessing step in order to guarantee that a test case does not have its outlier thrown away. As such, we report the percentage of cluster entities filtered out for being out-of-vocabulary separately from the outliers which are filtered out, for the latter results in an entire test case being discarded.
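A sketch of the multi-word lookup just described; the underscore-joined phrase keys and the `emb` mapping (string to numpy vector) are assumptions for illustration:

```python
# Greedy longest-matching-subphrase lookup for phrase embeddings, falling back
# to averaging per-token vectors; OOV tokens are ignored.
import numpy as np

def phrase_vector(tokens, emb):
    vectors, i = [], 0
    while i < len(tokens):
        for j in range(len(tokens), i, -1):   # longest subphrase first
            key = '_'.join(tokens[i:j])
            if key in emb:
                vectors.append(emb[key])
                i = j
                break
        else:
            i += 1  # this token (and any subphrase starting here) is OOV; skip it
    # If every token is out of vocabulary, the entity is discarded upstream.
    return np.mean(vectors, axis=0) if vectors else None
```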
In order to compare how well each vector embedding would do when run on unknown input data, we first collected the scores of each embedding on the entire dataset. Table 3 shows the Outlier Position Percentage (OPP) and accuracy scores of each embedding, along with the number of test groups which were skipped entirely and the mean percentage of out-of-vocabulary cluster entities and outliers among all test groups. As in Camacho-Collados & Navigli (2016), we used cosine similarity for the sim measure in Equation 2.

Table 3: Performance of English word embeddings on the entire WikiSem500 dataset.

Model       Corpus                   OPP     Acc.    Groups Skipped   % Cluster Items OOV   % Outliers OOV
GloVe       Common Crawl             75.53   38.57   5                6.33                  5.70
GloVe       Wikipedia+Gigaword       79.86   50.61   2                4.25                  4.02
CBOW        Wikipedia+Gigaword       84.97   55.80   2                4.25                  4.02
CBOW        Google News (phrases)    63.10   22.60   6                13.68                 15.02
CBOW        Google News              65.13   24.45   6                13.68                 15.02
CBOW        Leipzig+Europarl         74.59   42.62   18               22.03                 19.62
Skip-Gram   Wikipedia+Gigaword       84.44   57.66   2                4.25                  4.02

The MultiCCA (Leipzig+Europarl) CBOW vectors have the highest rate of out-of-vocabulary entities, likely due in large part to the fact that their vocabulary is an order of magnitude smaller than the other embeddings' (176,691, while the other embeddings had vocabulary sizes of over 1,000,000). Perhaps most surprising is the below-average performance of the Google News vectors. While attempting to understand this phenomenon, we noticed that disabling the phrase vectors boosted performance; as such, we have reported the performance of the vectors with and without phrase vectors enabled.

Inspecting the vocabulary of the Google News vectors, we have inferred that the vocabulary has undergone some form of normalization; performing the normalizations which we can be reasonably certain were done before evaluating has a negligible impact (~ +0.01%) on the overall score. The Google News scores shown in Table 3 are with the normalization enabled. Ultimately, we hypothesize that the discrepancy in Google News scores comes down to the training corpus. We observe a bias in performance on our training set towards Wikipedia-trained vectors (discussed below; see Table 5), and, additionally, we expect that the Google News corpus did not have the wide regional coverage of the other corpora.
In order to get a better comparison between the embeddings under identical conditions, we then took the intersection of in-vocabulary entities across all embeddings and reevaluated on this subset: 23.88% of cluster entities and 22.37% of outliers were out-of-vocabulary across all vectors, with 23 test groups removed from evaluation. Table 4 shows the results of this evaluation.

Table 4: Performance of English word embeddings on their common in-vocabulary intersection of the WikiSem500 dataset.

Model       Corpus                        OPP     Acc.
GloVe       Common Crawl                  76.73   43.25
GloVe       Wikipedia+Gigaword            76.19   47.69
CBOW        Wikipedia+Gigaword            82.59   55.90
CBOW        Google News (with phrases)    63.67   24.74
CBOW        Google News                   66.20   27.43
CBOW        Leipzig+Europarl (MultiCCA)   75.01   42.82
Skip-Gram   Wikipedia+Gigaword            82.03   56.80

The scores appear to scale roughly linearly when compared to Table 3, but these results serve as a more reliable 'apples to apples' comparison of the algorithms and training corpora.

Because Wikidata was the source of the dataset, we analyzed how using Wikipedia as a training corpus influenced the evaluation results. We trained three GloVe models with smaller vocabularies: one trained on only Gigaword, one trained on only Wikipedia, and one trained on both. The results of evaluating on the embeddings' common intersection are shown in Table 5. We observe a slight (~3.15% relative change) bias in OPP scores with Wikipedia over Gigaword, while finding a significantly larger (~19.12% relative change) bias in accuracy scores.

Table 5: Performance comparison of GloVe vectors trained on different corpora when evaluated on their common in-vocabulary intersection.

Corpus                 OPP     Acc.
Wikipedia+Gigaword     80.03   54.43
Wikipedia              77.39   49.95
Gigaword               76.36   45.07

We believe that this bias is acceptable, for OPP scores (which we believe to be more informative) are not as sensitive to the bias, and the numerous other factors involved in embedding generation (model, window size, etc.) can still be compared by controlling for the training corpora.

Additionally, we wanted to verify that the O1 outlier class (most similar) was the most difficult to distinguish from the cluster entities, followed by the O2 and O3 classes. We generated three separate datasets, each with only one class of outliers, and evaluated each embedding on each dataset. Figure 2 illustrates a strong positive correlation between outlier class and both OPP scores and accuracy.

Figure 2: OPP and accuracy scores of embeddings in Table 3 by outlier class. The Spearman correlation coefficients are shown (ρ = 0.77377 and ρ = 0.76065 for the two panels).

Finally, we used the non-English MultiCCA vectors (Ammar et al. (2016)) to evaluate the multilingual aspect of our dataset. We expect to see Spanish and German perform similarly to the English Europarl+Leipzig vectors, for the monolingual training corpora used to generate them consisted of Spanish and German equivalents of the English training corpus. Table 6 shows the results of the non-English evaluations.

Table 6: Performance of non-English word embeddings on the entire WikiSem500 dataset.

Language   OPP     Acc.    Groups Skipped   % Cluster Items OOV   % Outliers OOV   Vocab. Size
Spanish    77.25   46.00   22               21.55                 17.75            225,950
German     76.17   43.46   31               24.45                 25.74            376,552
Japanese   72.51   40.18   54               36.87                 24.66            70,551
Chinese    67.61   34.58   12               37.74                 34.29            70,865

We observe a high degree of consistency with the results of the English vectors. The Japanese and Chinese scores are somewhat lower, but this is likely due to their having smaller training corpora and more limited vocabularies than their counterparts in other languages.

"}, {"section_index": "8", "section_name": "4.4 CORRELATION WITH DOWNSTREAM PERFORMANCE", "section_text": "In light of recent concerns raised about the correlation between intrinsic word embedding evaluation and performance in downstream tasks, we sought to investigate the correlation between WikiSem500 performance and extrinsic evaluations. We used the embeddings from Schnabel et al. (2015) and ran the outlier detection task on them with our dataset.
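The correlation analysis just described amounts to comparing per-embedding score lists; a minimal sketch, assuming `wikisem_scores` and `task_scores` hold one entry per evaluated embedding:

```python
# Pearson correlation between WikiSem500 scores and extrinsic task scores,
# as used to produce the analysis described in this subsection.
from scipy.stats import pearsonr

def intrinsic_extrinsic_correlation(wikisem_scores, task_scores):
    r, p_value = pearsonr(wikisem_scores, task_scores)
    return r, p_value

# e.g. r close to 1 against sentiment analysis, near 0 against chunking (Figure 3b).
```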
As a baseline measurement of how well our dataset correlates with performance on alternative intrinsic tasks, we compared our evaluation with the scores reported in Schnabel et al. (2015) on the well-known analogy task (Mikolov et al. (2013a)). Figure 3a illustrates strong correlations between analogy task performance and our evaluation's OPP scores and accuracy.

Figure 3b displays the Pearson's correlation between the performance of each embedding on the WikiSem500 dataset and the extrinsic scores of each embedding on noun-phrase chunking and sentiment analysis reported in Schnabel et al. (2015).

Figure 3: Pearson's correlation between WikiSem500 outlier detection performance and performance on the analogy task and extrinsic tasks. Distributions of values are shown on the diagonal.

Similar to the results seen in the paper, performance on our dataset correlates strongly with performance on a semantic-based task (sentiment analysis), with Pearson's correlation coefficients higher than 0.97 for both accuracy and OPP scores. On the other hand, we observe a weak-to-nonexistent correlation with chunking. This is expected, however, for the dataset we have constructed consists of items which differ in semantic meaning; syntactic meaning is not captured by the dataset. It is worth noting the inconsistency between this and the intrinsic results in Figure 3a, which indicate a stronger correlation with the syntactic subset of the analogy task than its semantic subset. This is expected, for it agrees with the poor correlation between chunking and intrinsic performance shown in Schnabel et al. (2015).

"}, {"section_index": "9", "section_name": "5 FUTURE WORK", "section_text": "Due to the favorable results we have seen from the WikiSem500 dataset, we intend to release test groups in additional languages using the method described in this paper. Additionally, we plan to study further the downstream correlation of performance on our dataset with additional downstream tasks.

Moreover, while we find a substantial correlation between performance on our dataset and on a
semantically-based extrinsic task, the relationship between performance and syntactically-based tasks leaves much to be desired. We believe that the approach taken in this paper to construct our dataset could be retrofitted to a system such as WordNet (2010) or Wiktionary (2016) (for multilingual data) in order to construct syntactically similar clusters of items in a similar manner. We hypothesize that performance on such a dataset would correlate much more strongly with syntactically-based extrinsic evaluations such as chunking and part-of-speech tagging.

"}, {"section_index": "10", "section_name": "6 CONCLUSION", "section_text": "We have described a language-agnostic technique for generating a dataset consisting of semantically related items by treating a knowledge base as a graph. In addition, we have used this approach to construct the WikiSem500 dataset, which we have released. We show that performance on this dataset correlates strongly with downstream performance on sentiment analysis. This method allows for creation of much larger scale datasets in a larger variety of languages without the time-intensive task of human creation. Moreover, the parallel between Wikidata's graph structure and the annotation guidelines from Camacho-Collados & Navigli (2016) preserves the simple-to-understand structure of the original dataset.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925, 2016.

Miroslav Batchkarov, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. A critique of word similarity as a method for evaluating distributional semantic models. 2016.

Elia Bruni, Gemma Boleda, Marco Baroni, and Nam-Khanh Tran. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pp. 136-145. Association for Computational Linguistics, 2012.

Jose Camacho-Collados and Roberto Navigli. Find the word that does not belong: A framework for an intrinsic evaluation of word vector representations. In ACL Workshop on Evaluating Vector Space Representations for NLP, pp. 43-50. Association for Computational Linguistics, 2016.

Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. Problems with evaluation of word embeddings using word similarity tasks. arXiv preprint arXiv:1605.02276, 2016.

Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. In Proceedings of the 10th International Conference on World Wide Web, pp. 406-414. ACM, 2001.

Anna Gladkova and Aleksandr Drozd. Intrinsic evaluations of word embeddings: What can we do better? ACL 2016, pp. 36, 2016.

Felix Hill, Roi Reichart, and Anna Korhonen. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 2016.

Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5, pp. 79-86, 2005.

Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In HLT-NAACL, volume 13, pp. 746-751, 2013b.

Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. English Gigaword fifth edition LDC2011T07, 2011. URL https://catalog.ldc.upenn.edu/LDC2011T07. [Online; accessed 28-October-2016].

Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pp. 1532-1543, 2014.
Wikimedia. Wikimedia downloads, Jul 2016. URL https://dumps.wikimedia.org/. [Online; accessed 28-October-2016].

WordNet. About WordNet, 2010. URL http://wordnet.princeton.edu. [Online; accessed 28-October-2016].

"}, {"section_index": "12", "section_name": "A FORMALIZATION", "section_text": "We now provide a formal description of the approach taken to generate our dataset.

Let V be the set of entities in Wikidata. For all v1, v2 ∈ V, we denote the relations v1 <I v2 when v1 is an instance of v2, and v1 <S v2 when v1 is a subclass of v2. We then define I : V -> V* as the following 'instances' mapping:

    I(v) = {v' ∈ V | v' <I v}

For convenience, we then denote C = {v ∈ V : |I(v)| > 2}; the interpretation being that C is the set of entities which have enough instances to possibly be viable clusters. We now formally state the following definition:

Definition 1. A set A ⊆ V is a cluster if A = I(v) for some v ∈ C. We additionally say that v is the class associated with the cluster A.

Let P : V -> V* be the following 'parent of' mapping, with the corresponding 'children of' mapping P⁻¹ and iterated mapping Pᵏ:

    P(v) = {v' ∈ V | v <S v'}

    P⁻¹(v) = {v' ∈ V | v' <S v}

    Pᵏ(v) = P(v) if k = 1;  ∪_{p ∈ Pᵏ⁻¹(v)} P(p) if k > 1

We also define the recursive instances mapping I*:

    I*(v) = I(v) ∪ ∪_{v' ∈ P⁻¹(v)} I*(v')

That is, I*(v) is the set of all instances of v and all instances of anything that is a subclass of v (recursively).

We then define the measure d : V × V -> N to be the graph distance between any entities in V using the following set of edges:

    E_SU = {(v1, v2) | v1 <S v2 ∨ v2 <S v1}

The three outlier classes for a class v are then defined as follows⁹, where δ is a distance threshold¹⁰:

    O1(v) = ( ∪_{p ∈ P(v)} ∪_{c ∈ P⁻¹(p)\{v}} I*(c) ) \ I(v)

    O2(v) = ( ∪_{p ∈ P²(v)} ∪_{c ∈ P⁻¹(p)\{v}} I*(c) ) \ I(v)

    O3(v) = ( ∪_{p ∈ P(v)} {e ∈ I(v') | v' ∈ V, d(p, v') ≥ δ} ) \ I(v)

⁹For the definition of O2, note that we do not say that it must be true that p ∈ P²(v) \ P(v). In practice, however, avoiding (if not excluding) certain values of p in this manner can help improve the quality of resulting clusters, at the cost of reducing the number of clusters which can be produced.

¹⁰The WikiSem500 dataset was generated with a value of δ = 7.

To simplify the model, we assume that all three of the above sets are mutually exclusive. Intuitively, the three outlier classes denote different degrees of 'dissimilarity' from the original cluster; O1 outliers are the most challenging to distinguish, for they are semantically quite similar to the cluster, O2 outliers are slightly easier to distinguish, and O3 outliers should be quite simple to pick out.

The final dataset (a set of (cluster, outliers) pairs) is then created by serializing the following:

    D = τ( ∪_{c ∈ C} f_D( ⟨ f_i(σ_i[I(c)]), f_o(σ_o[O1(c)] ∪ σ_o[O2(c)] ∪ σ_o[O3(c)]) ⟩ ) )

where σ_i and σ_o are functions which select up to a given number of elements from the given set of instances and outliers (respectively), and f_D, f_i, and f_o are functions which filter out dataset elements, instances, and outliers (respectively) based on any number of heuristics (see Section 3.1). Finally, τ takes the resulting tuples and resolves their QIDs to the appropriate surface strings.

The benefit of stating the dataset in the above terms is that it is highly configurable. In particular, different languages can be targeted by simply changing τ to resolve Wikidata entities to their labels in that language."}]
Sys6GJqxl | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Several works have demonstrated that some adversarial examples generated for one model may also be misclassified by another model. Such a property is referred to as transferability, which can be leveraged to perform black-box attacks. This property has been exploited by constructing a substitute of the black-box model, and generating adversarial instances against the substitute to attack the black-box system (Papernot et al. (2016a;b)). However, so far, transferability is mostly examined over small datasets, such as MNIST (LeCun et al. (1998)) and CIFAR-10 (Krizhevsky & Hinton (2009)). Transferability over large scale datasets, such as ImageNet (Russakovsky et al. (2015)), has yet to be better understood.

In this work, we are the first to conduct an extensive study of the transferability of different adversarial instance generation strategies applied to different state-of-the-art models trained over a large scale dataset. In particular, we study two types of adversarial examples: (1) non-targeted adversarial examples, which can be misclassified by a network, regardless of what the misclassified labels may be; and (2) targeted adversarial examples, which can be classified by a network as a target label. We examine several existing approaches searching for adversarial examples based on a single model. While non-targeted adversarial examples are more likely to transfer, we observe few targeted adversarial examples that are able to transfer with their target labels."}, {"section_index": "1", "section_name": "DELVING INTO TRANSFERABLE ADVERSARIAL EXAMPLES AND BLACK-BOX ATTACKS", "section_text": "Chang Liu, Dawn Song

University of California, Berkeley

An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understand the transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.

Recent research has demonstrated that for a deep architecture, it is easy to generate adversarial examples, which are close to the original ones but are misclassified by the deep architecture (Szegedy et al. (2013); Goodfellow et al. (2014)). The existence of such adversarial examples may have severe consequences, which hinders vision-understanding-based applications, such as autonomous driving.
Most of these studies require explicit knowledge of the underlying models. It remains an open question how to efficiently find adversarial examples for a black-box model.

Our contributions are as follows:

- For ImageNet models, we show that while existing approaches are effective at generating non-targeted transferable adversarial examples (Section 3), only few targeted adversarial examples generated by existing methods can transfer (Section 4).
- We propose novel ensemble-based approaches to generate adversarial examples (Section 5). Our approaches enable a large portion of targeted adversarial examples to transfer among multiple models for the first time.
- We are the first to present that targeted adversarial examples generated for models trained on ImageNet can transfer to a black-box system, i.e., Clarifai.com, whose model, training data, and label set are unknown to us (Section 7). In particular, Clarifai.com's label set is very different from ImageNet's.
- We conduct the first analysis of geometric properties for large models trained over ImageNet (Section 6), and the results reveal several interesting findings, such as that the gradient directions of different models are orthogonal to each other.

We further propose a novel strategy to generate transferable adversarial images using an ensemble of multiple models. In our evaluation, we observe that this new strategy can generate non-targeted adversarial instances with better transferability than other methods examined in this work. Also, for the first time, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels.

We study geometric properties of the models in our evaluation. In particular, we show that the gradient directions of different models are orthogonal to each other. We also show that decision boundaries of different models align well with each other, which partially illustrates why adversarial examples can transfer.

Last, we study whether generated adversarial images can attack Clarifai.com, a commercial company providing state-of-the-art image classification services. We have no knowledge about the training dataset and the types of models used by Clarifai.com; meanwhile, the label set of Clarifai.com is quite different from ImageNet's. We show that even in this case, both non-targeted and targeted adversarial images transfer to Clarifai.com. This is the first work documenting the success of generating both non-targeted and targeted adversarial examples for a black-box state-of-the-art online image classification system, whose model and training dataset are unknown to the attacker.

In the following, we first discuss related work, and then present the background knowledge and experiment setup in Section 2. Then we present each of our experiments and conclusions in the corresponding sections as mentioned above.

Related work. Transferability of adversarial examples was first examined by Szegedy et al. (2013), which studied the transferability (1) between different models trained over the same dataset, and (2) between the same or different models trained over disjoint subsets of a dataset. However, Szegedy et al. (2013) only studied MNIST.

The study of transferability was followed by Goodfellow et al. (2014), which attributed the phenomenon of transferability to the reason that the adversarial perturbation is highly aligned with the weight vector of the model. Again, this hypothesis was tested using the MNIST and CIFAR-10 datasets. We show that this is not the case for models trained over ImageNet.

Papernot et al. (2016a;b) examined constructing a substitute model to attack a black-box target model. To train the substitute model, they developed a technique that synthesizes a training set and annotates it by querying the target model for labels. They demonstrate that using this approach, black-box attacks are feasible towards machine learning services hosted by Amazon, Google, and MetaMind. Further, Papernot et al. (2016a) studied the transferability between deep neural networks and other models such as decision trees, kNN, etc.

Our work differs from Papernot et al. (2016a;b) in three aspects. First, in these works, only the model and the training process are a black box, but the training set and the test set are controlled by the attacker; in contrast, we attack Clarifai.com, whose model, training data, training process, and even the test label set are unknown to the attacker. Second,
the datasets studied in these works are small scale, i.e., MNIST and GTSRB (Stallkamp et al. (2012)); in our work, we study the transferability over larger models and a larger dataset, i.e., ImageNet. Third, to attack black-box machine learning systems, we do not query the systems for constructing the substitute model ourselves.

In a concurrent and independent work, Moosavi-Dezfooli et al. (2016) showed the existence of a universal perturbation for each model, which can transfer across different images. They also show that the adversarial images generated using these universal perturbations can transfer across different models on ImageNet. However, they only examine the non-targeted transferability, while our work studies both non-targeted and targeted transferability over ImageNet.

We assume a classifier fθ(x) outputs a category (or a label) as the prediction. Given an original image x, with ground truth label y, the adversarial deep learning problem is to seek adversarial examples for the classifier fθ(x). Specifically, we consider two classes of adversarial examples. A non-targeted adversarial example x* is an instance that is close to x, in which case x* should have the same ground truth as x, while fθ(x*) ≠ y. For the problem to be non-trivial, we assume fθ(x) = y without loss of generality. A targeted adversarial example x* is close to x and satisfies fθ(x*) = y*, where y* is a target label specified by the adversary, and y* ≠ y.

In this work, we consider three classes of approaches for generating adversarial examples: optimization-based approaches, fast gradient approaches, and fast gradient sign approaches. Each class has non-targeted and targeted versions, respectively.

Formally, given an image x with ground truth y = fθ(x), searching for a non-targeted adversarial example can be modeled as searching for an instance x* satisfying the following constraints:

    fθ(x*) ≠ y    (1)
    d(x, x*) ≤ B    (2)

where d(·, ·) is a metric to quantify the distance between an original image and its adversarial counterpart, and B, called distortion, is an upper bound placed on this distance. Without loss of generality, we assume model f is composed of a network Jθ(x), which outputs the probability for each category, so that f outputs the category with the highest probability.

Optimization-based approach. The constrained search above can be approximated by solving the following optimization problem:

    argmin_{x*} λ d(x, x*) - ℓ(1_y, Jθ(x*))    (3)

where 1_y is the one-hot encoding of the ground truth label y, ℓ is a loss function to measure the distance between the prediction and the ground truth, and λ is a constant to balance constraints (2) and (1), which is empirically determined. Here, loss function ℓ is used to approximate constraint (1), and its choice can affect the effectiveness of searching for an adversarial example. In this work, we choose ℓ(u, v) = log(1 - u · v), which is shown to be effective by Carlini & Wagner (2016).
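A PyTorch sketch of Objective (3) follows. The step count, learning rate, and λ = 0 anticipate the settings reported in Section 3; the model interface `J` (softmax outputs on a single image batch) is an assumption for illustration:

```python
# Sketch of the optimization-based approach (Equation 3) with Adam.
import torch

def optimize_adversarial(J, x, y, lambda_=0.0, steps=100, lr=4.0):
    x_adv = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    one_hot = torch.nn.functional.one_hot(y, J(x).shape[-1]).float()
    for _ in range(steps):
        opt.zero_grad()
        # l(1_y, J(x*)) = log(1 - 1_y . J(x*)); subtracting it drives the
        # predicted probability of the true label y toward zero.
        loss = lambda_ * torch.dist(x, x_adv) - torch.log(1 - (one_hot * J(x_adv)).sum())
        loss.backward()
        opt.step()
        x_adv.data.clamp_(0, 255)  # keep pixels in the valid range
    return x_adv.detach()
```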
Fast gradient sign (FGS). Goodfellow et al. (2014) proposed the fast gradient sign (FGS) method, in which the gradient need be computed only once to generate an adversarial example. FGS can be used to generate adversarial images to meet the L∞ norm bound. Formally, non-targeted adversarial examples are constructed as

    x* <- clip(x + B sgn(∇x ℓ(1_y, Jθ(x))))    (4)

Here, clip(x) is used to clip each dimension of x to the range of pixel values, i.e., [0, 255] in this work. We make a slight variation to choose ℓ(u, v) = log(1 - u · v), which is the same as used in the optimization-based approach.

Fast gradient (FG). The fast gradient approach (FG) is similar to FGS, but instead of moving along the gradient sign direction, FG moves along the gradient direction. In particular, we have

    x* <- clip(x + B ∇x ℓ(1_y, Jθ(x)) / ‖∇x ℓ(1_y, Jθ(x))‖)    (5)

We call both FGS and FG fast gradient-based approaches. Their targeted counterparts move so as to minimize the loss toward the target label y*; for FG, for example,

    x* <- clip(x - B ∇x ℓ'(1_{y*}, Jθ(x)) / ‖∇x ℓ'(1_{y*}, Jθ(x))‖)

where ℓ' is the same as the one used for the optimization-based approach.
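Equations (4) and (5) amount to a single gradient computation followed by one step; a minimal PyTorch sketch, where `loss_fn` computes ℓ(1_y, Jθ(x)) as defined above:

```python
# One-step fast gradient-based attacks (Equations 4 and 5).
import torch

def fast_gradient(x, y, J, loss_fn, B, sign=False):
    x = x.clone().requires_grad_(True)
    loss = loss_fn(J(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    if sign:
        step = grad.sign()          # FGS: move along the gradient sign
    else:
        step = grad / grad.norm()   # FG: move along the normalized gradient
    return (x + B * step).clamp(0, 255).detach()
```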
The images and target labels in our evaluation can be found on website\nDistortion. Besides transferability, another important factor is the distortion between adversarial images and the original ones. We measure the distortion by root mean square deviation, i.e., RMSD. which is computed as d(x*, x) = ,(x+ x)2/N, where x* and x are the vector representations of an adversarial image and the original one respectively, N is the dimensionality of x and x*, and x, denotes the pixel value of the i-th dimension of x, within range [0, 255], and similar for x*."}, {"section_index": "2", "section_name": "S NON-TARGETED ADVERSARIAL EXAMPLES", "section_text": "In this section, we examine different approaches for generating non-targeted adversarial images\nTo apply the optimization-based approach for a single model, we initialize x* to be x and use Adan Optimizer (Kingma & Ba (2014)) to optimize Objective (3) . We find that we can tune the RMSI by adjusting the learning rate of Adam and X. We find that, for each model, we can use a smal learning rate to generate adversarial images with small RMSD, i.e. < 2, with any X. In fact, we finc that when initializing x* with x, Adam Optimizer will search for an adversarial example around x even when we set to be 0, i.e., not restricting the distance between x* and x. Therefore, we se X to be 0 for all experiments using optimization-based approaches throughout the paper. Althoug these adversarial examples with small distortions can successfully fool the target model, howeve. they cannot transfer well to other models (details can be found in our online technical report: |Li et al.(2016)).\nNon-targeted adversarial examples transfer. We generate non-targeted adversarial examples or one network, but evaluate them on another, and Table|1|Panel A presents the results. From the table we can observe that"}, {"section_index": "3", "section_name": "3.2 FAST GRADIENT-BASED APPROACHES", "section_text": "We then examine the effectiveness of fast gradient-based approaches. A good property of fast. gradient-based approaches is that all generated adversarial examples lie in a 1-D subspace. There- fore, we can easily approximate the minimal distortion in this subspace of transferable adversarial examples between two models. In the following, we first control the RMSD to study fast gradient- based approaches' effectiveness. Second, we study the transferable minimal distortions of fast gradient-based approaches.\nWe increase the learning rate to allow the optimization algorithm to search for adversarial images with larger distortion. In particular, we set the learning rate to be 4. We run Adam Optimizer for 100 iterations to generate the adversarial images. We observe that the loss converges after 100 iterations An alternative optimization-based approach leading to similar results can be found in our online technical report:Liu et al.(2016).\nThe diagonal contains all O values. This says that all adversarial images generated for one model can mislead the same model. A large proportion of non-targeted adversarial images generated for one model using the optimization-based approach can transfer to another. . Although the three ResNet models share similar architectures which differ only in the hy. perparameters, adversarial examples generated against a ResNet model do not necessarily transfer to another ResNet model better than other non-ResNet models. 
Since the distortion B and the RMSD of the generated adversarial images are highly correlated, we can choose this hyperparameter B to generate adversarial images with a given RMSD. In Table 1 Panel B, we generate adversarial images using FG such that the average RMSD is almost the same as those generated using the optimization-based approach. We observe that the diagonal values in the table are all positive, which means that FG cannot fully mislead the models. A potential reason is that FG can be viewed as approximating the optimization, but is tailored for speed over accuracy.

Table 1: Transferability of non-targeted adversarial images generated between pairs of models. The first column indicates the average RMSD of all adversarial images generated for the model in the corresponding row. The cell (i, j) indicates the accuracy of the adversarial images generated for model i (row) evaluated over model j (column). Results of top-5 accuracy can be found in our online technical report: Liu et al. (2016).

Panel A: Optimization-based approach

             RMSD    ResNet-152   ResNet-101   ResNet-50   VGG-16   GoogLeNet
ResNet-152   23.45   4%           13%          13%         20%      12%
ResNet-101   23.49   19%          4%           11%         23%      13%
ResNet-50    23.49   25%          19%          5%          25%      14%
VGG-16       23.73   20%          16%          15%         1%       7%
GoogLeNet    23.45   25%          25%          17%         19%      1%

Panel B: Fast gradient approach

             RMSD    ResNet-152   ResNet-101   ResNet-50   VGG-16   GoogLeNet
ResNet-152   22.83   0%           13%          18%         19%      11%
ResNet-101   23.81   19%          0%           21%         21%      12%
ResNet-50    22.86   23%          20%          0%          21%      18%
VGG-16       22.51   22%          17%          17%         0%       5%
GoogLeNet    22.58   39%          38%          34%         19%      0%

We also evaluate FGS, but the transferability of the generated images is worse than the ones generated using either FG or optimization-based approaches. The results can be found in our online technical report: Liu et al. (2016). It shows that when RMSD is around 23, the accuracies of the adversarial images generated by FGS are greater than their counterparts for FG. We hypothesize that this fact is the reason why the transferability of FGS is worse.

For an image x and two models M1, M2, we can approximate the minimal distortion B along a direction δ, such that x_B = x + Bδ generated for M1 is adversarial for both M1 and M2. Here δ is the direction, i.e., sgn(∇x ℓ) for FGS, and ∇x ℓ/‖∇x ℓ‖ for FG.

We refer to the minimal transferable RMSD from M1 to M2 using FG (or FGS) as the RMSD of a transferable adversarial example x_B with the minimal transferable distortion B from M1 to M2 using FG (or FGS). The minimal transferable RMSD can illustrate the tradeoff between distortion and transferability.

In the following, we approximate the minimal transferable RMSD through a linear search by sampling B every 0.1 step. We choose the linear-search method rather than the binary-search method to determine the minimal transferable RMSD because the adversarial images generated from an original image may come from multiple intervals. The experiment can be found in our online technical report: Liu et al. (2016).

Minimal transferable RMSD using FG and FGS. Figure 1 plots the cumulative distribution function (CDF) of the minimal transferable RMSD from VGG-16 to ResNet-152 using non-targeted FG (Figure 1a) and FGS (Figure 1b). From the figures, we observe that both FG and FGS can find 100% transferable adversarial images with RMSD less than 80.91 and 86.56, respectively.
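A sketch of the linear search just described; the model arguments are assumed callables returning top-1 labels, and `max_B` is an illustrative cap:

```python
# Linear search for the minimal transferable distortion B along a fixed FG/FGS
# direction delta, stepping B by 0.1 as described above. Linear search is used
# because the adversarial region along delta may consist of multiple intervals.
import numpy as np

def minimal_transferable_rmsd(x, y, delta, model1, model2, step=0.1, max_B=100.0):
    B = step
    while B <= max_B:
        x_adv = np.clip(x + B * delta, 0, 255)
        if model1(x_adv) != y and model2(x_adv) != y:
            return np.sqrt(np.mean((x_adv - x) ** 2))  # RMSD at the first success
        B += step
    return None  # no transferable example found along this direction
```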
For an image x and two models M1, M2, we can approximate the minimal distortion B along a direction δ, such that x_B = x + Bδ generated for M1 is adversarial for both M1 and M2. Here δ is the direction, i.e., sgn(∇_x ℓ) for FGS, and ∇_x ℓ / ‖∇_x ℓ‖ for FG.

We refer to the minimal transferable RMSD from M1 to M2 using FG (or FGS) as the RMSD of a transferable adversarial example x_B with the minimal transferable distortion B from M1 to M2 using FG (or FGS). The minimal transferable RMSD illustrates the tradeoff between distortion and transferability.

In the following, we approximate the minimal transferable RMSD through a linear search, sampling B every 0.1 step. We choose the linear-search method rather than a binary-search method to determine the minimal transferable RMSD because the adversarial images generated from an original image may come from multiple intervals. The experiment can be found in our online technical report: Liu et al. (2016).

Minimal transferable RMSD using FG and FGS. Figure 1 plots the cumulative distribution function (CDF) of the minimal transferable RMSD from VGG-16 to ResNet-152 using non-targeted FG (Figure 1a) and FGS (Figure 1b). From the figures, we observe that both FG and FGS can find 100% transferable adversarial images with RMSD less than 80.91 and 86.56 respectively. Further, the FG method can generate transferable attacks with smaller RMSD than FGS. A potential reason is that while FGS minimizes the distortion's L∞ norm, FG minimizes its L2 norm, which is proportional to RMSD.

[Figure 1 plots omitted: two CDF curves over the minimal transferable RMSD (x-axis, 0-90; y-axis, CDF in %); annotated values include 6.77 and 9.80 (medians, green) and 31.75 and 37.83 (90% points, red). Panels: (a) Fast Gradient, (b) Fast Gradient Sign.]

Figure 1: The CDF of the minimal transferable RMSD from VGG-16 to ResNet-152 using FG (a) and FGS (b). The green line labels the median minimal transferable RMSD, while the red line labels the minimal transferable RMSD needed to reach 90%.
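A minimal sketch of this linear search is given below; the predicates `is_adversarial_m1` and `is_adversarial_m2` are hypothetical stand-ins that report whether a model's prediction differs from the ground truth.

import numpy as np

def minimal_transferable_B(x, delta, is_adversarial_m1, is_adversarial_m2,
                           B_max=100.0, step=0.1):
    """Linear search over B (sampled every `step`, as in the paper) for
    the smallest B such that x + B * delta fools both models."""
    for B in np.arange(step, B_max + step, step):
        x_B = np.clip(x + B * delta, 0, 255)
        if is_adversarial_m1(x_B) and is_adversarial_m2(x_B):
            return B   # first hit; a binary search could miss it, since
                       # the adversarial B's may form multiple intervals
    return None        # no transferable example found up to B_max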
"}, {"section_index": "4", "section_name": "3.3 COMPARISON WITH RANDOM PERTURBATIONS", "section_text": "We also evaluate the test accuracy when we add Gaussian noise to the 100 images in our test set. The concrete results can be found in our online technical report: Liu et al. (2016), where we show that the "transferability" of this approach is significantly worse than that of either optimization-based approaches or fast gradient-based approaches."}, {"section_index": "5", "section_name": "4 TARGETED ADVERSARIAL EXAMPLES", "section_text": "In this section, we examine the transferability of targeted adversarial images. Table 2 presents the results for the optimization-based approach. We observe that (1) the prediction of targeted adversarial images can match the target labels when evaluated on the same model that is used to generate the adversarial examples; but (2) the targeted adversarial images can rarely be predicted as the target labels by a different model. In the latter case, we say that the target labels do not transfer. Even when we increase the distortion, we still do not observe improvements in making target labels transfer. Some results can be found in our online technical report: Liu et al. (2016). Even if we compute the matching rate based on top-5 accuracy, the highest matching rate is only 10%. The results can be found in our online technical report: Liu et al. (2016).

             RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
ResNet-152   23.13    100%          2%          1%        1%       1%
ResNet-101   23.16      3%        100%          3%        2%       1%
ResNet-50    23.06      4%          2%        100%        1%       1%
VGG-16       23.59      2%          1%          2%      100%       1%
GoogLeNet    22.87      1%          1%          0%        1%     100%

Table 2: The matching rate of targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the matching rate of the targeted adversarial images generated for model i (row) when evaluated on model j (column). The top-5 results can be found in our online technical report: Liu et al. (2016).

We also examine the targeted adversarial images generated by fast gradient-based approaches, and we observe that the target labels do not transfer either. The results can be found in our online technical report: Liu et al. (2016). In fact, most targeted adversarial images cannot mislead the model for which the adversarial images are generated into predicting the target labels, regardless of how large a distortion is used. We attribute this to the fact that the fast gradient-based approaches only search for attacks in a 1-D subspace. In this subspace, the set of possible predictions may contain only a small subset of all labels, which usually does not contain the target label. In Section 6, we study decision boundaries regarding this issue.

We also evaluate the matching rate of images with added Gaussian noise, as described in Section 3.3. However, we observe that the matching rate on any of the 5 models is 0%. Therefore, we conclude that by adding Gaussian noise, the attacker cannot generate successful targeted adversarial examples at all, let alone targeted transferability.

We hypothesize that if an adversarial image remains adversarial for multiple models, then it is more likely to transfer to other models as well. We develop techniques to generate adversarial images for multiple models. The basic idea is to generate adversarial images for the ensemble of the models. Formally, given k white-box models with softmax outputs being J1, ..., Jk, an original image x, and its ground truth y, the ensemble-based approach solves the following optimization problem (for a targeted attack):

argmin_{x*}  −log( ( Σ_{i=1}^{k} α_i J_i(x*) ) · 1_{y*} ) + λ d(x, x*)     (6)

where y* is the target label specified by the adversary, Σ_i α_i J_i(x*) is the ensemble model, and α_i are the ensemble weights, with Σ_{i=1}^{k} α_i = 1. The non-targeted counterpart can be derived similarly. In doing so, we hope the generated adversarial images remain adversarial for an additional black-box model J_{k+1}.
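A sketch of the targeted ensemble loss in Objective (6) follows. The `models` callables are assumed to return softmax outputs J_i, and the RMSD-style distance term stands in for d(x, x*); none of these names come from the paper's code.

import torch

def ensemble_targeted_loss(models, alphas, x_star, y_target, x, lam):
    """Objective (6): negative log of the ensemble's probability on the
    target label plus a distortion penalty. `alphas` sum to 1."""
    p_ensemble = sum(a * m(x_star) for a, m in zip(alphas, models))
    target_prob = p_ensemble.gather(1, y_target.view(-1, 1)).squeeze(1)
    rmsd = (x_star - x).pow(2).mean().sqrt()
    return (-torch.log(target_prob)).mean() + lam * rmsd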
We evaluate the effectiveness of the ensemble-based approach. For each of the five models, we treat it as the black-box model to attack, and generate adversarial images for the ensemble of the remaining four, which is considered white-box. We evaluate the generated adversarial images over all five models. Throughout the rest of the paper, we refer to the approaches evaluated in Sections 3 and 4 as the approaches using a single model, and to the ensemble-based approaches discussed in this section as the approaches using an ensemble model.

Optimization-based approach. We use Adam to optimize Objective (6) with equal ensemble weights across all models in the ensemble to generate targeted adversarial examples. In particular, we set the learning rate of Adam to be 8 for each model. In each iteration, we compute the Adam update for each model, sum up the four updates, and add the aggregate onto the image. We run 100 iterations of updates, and we observe that the loss converges after 100 iterations. By doing so, for the first time, we observe a large proportion of targeted adversarial images whose target labels can transfer. The results are presented in Table 3. We observe that not all targeted adversarial images can be misclassified to the target labels by the models used in the ensemble. This suggests that while searching for an adversarial example for the ensemble model, there is no direct supervision to mislead any individual model in the ensemble to predict the target label. Further, from the diagonal numbers of the table, we observe that the transferability to ResNet models is better than to VGG-16 or GoogLeNet, when adversarial examples are generated against all models except the target model.

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
-ResNet-152   30.68     38%         76%         70%       97%      76%
-ResNet-101   30.76     75%         43%         69%       98%      73%
-ResNet-50    30.26     84%         81%         46%       99%      77%
-VGG-16       31.13     74%         78%         68%       24%      63%
-GoogLeNet    29.70     90%         87%         83%       99%      11%

Table 3: The matching rate of targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) indicates the percentage of the targeted adversarial images generated for the ensemble of the four models except model i (row) that is predicted as the target label by model j (column). In each row, the minus sign "−" indicates that the model of the row is not used when generating the attacks. Results of top-5 matching rate can be found in our online technical report: Liu et al. (2016).

We also evaluate non-targeted adversarial images generated by the ensemble-based approach, and observe that the generated adversarial images have almost perfect transferability. We use the same procedure as for the targeted version, except for the objective used to generate the adversarial images. We evaluate the generated adversarial images over all models. The results are presented in Table 4. The generated adversarial images all have RMSDs around 17, which is lower than the 22 to 23 of the optimization-based approach using a single model (see Table 1 for comparison). When the adversarial images are evaluated on models which are not used to generate the attack, the accuracy is no greater than 6%. For reference, the corresponding accuracies for all approaches evaluated in Section 3 using one single model are at least 12%. Our experiments demonstrate that the ensemble-based approaches can generate almost perfectly transferable adversarial images.

              RMSD   ResNet-152  ResNet-101  ResNet-50  VGG-16  GoogLeNet
-ResNet-152   17.17      0%          0%          0%        0%       0%
-ResNet-101   17.25      0%          1%          0%        0%       0%
-ResNet-50    17.25      0%          0%          2%        0%       0%
-VGG-16       17.80      0%          0%          0%        6%       0%
-GoogLeNet    17.41      0%          0%          0%        0%       5%

Table 4: Accuracy of non-targeted adversarial images generated using the optimization-based approach. The first column indicates the average RMSD of the generated adversarial images. Cell (i, j) corresponds to the accuracy of the attack generated using the four models except model i (row) when evaluated over model j (column). In each row, the minus sign "−" indicates that the model of the row is not used when generating the attacks. Results of top-5 accuracy can be found in our online technical report: Liu et al. (2016).

Fast gradient-based approach. The results for non-targeted fast gradient-based approaches applied to the ensemble can be found in our online technical report: Liu et al. (2016). We observe that the diagonal values are not zero, which is the same as we observed in the results for FG and FGS applied to a single model. We hypothesize that a potential reason is that the gradient directions of different models in the ensemble are orthogonal to each other, as we will illustrate in Section 6. In this case, the gradient direction of the ensemble is almost orthogonal to that of each model in the ensemble, so searching along this direction may require a large distortion to reach adversarial examples.

For targeted adversarial examples generated using FG and FGS based on an ensemble model, their transferability is no better than that of the ones generated using a single model. The results can be found in our online technical report: Liu et al. (2016). We hypothesize the same reason to explain this: there are only a few possible target labels in total in the 1-D subspace.

In this section, we show some geometric properties of the models to better understand transferable adversarial examples. Prior works also try to understand the geometric properties of adversarial examples theoretically (Fawzi et al. (2016)) or empirically (Goodfellow et al. (2014)). In this work, we examine large models trained over a large dataset with 1000 labels, whose geometric properties have not been examined before. This allows us to make new observations to better understand the models and their adversarial examples.

The gradient directions of different models in our evaluation are almost orthogonal to each other. We study whether the adversarial directions of different models align with each other. We calculate the cosine of the angle between the gradient directions of different models, and the results can be found in our online technical report: Liu et al. (2016). We observe that all non-diagonal values are close to 0, which indicates that for most images, their gradient directions with respect to different models are orthogonal to each other.

Decision boundaries of the non-targeted approaches using a single model. We study the decision boundaries of different models to understand why adversarial examples transfer. We choose two normalized orthogonal directions d1, d2, one being the gradient direction of VGG-16 and the other being randomly chosen. Each point (u, v) in this 2-D plane corresponds to the image x + u·d1 + v·d2, where x is the pixel value vector of the original image. For each model, we plot the label of the image corresponding to each point, and get Figure 3 using the image in Figure 2.

Figure 2: The example image used to study the decision boundary. Its ID in the ILSVRC 2012 validation set is 49443, and its ground truth label is "anemone fish."

[Figure 3 plots omitted: decision regions of VGG-16, ResNet-50, ResNet-101, ResNet-152 and GoogLeNet over the span plane, with zoomed-in (about ±20 pixels) and zoomed-out (about ±100 pixels) views per model.]

Figure 3: Decision regions of different models. We pick the same two directions for all plots: one is the gradient direction of VGG-16 (x-axis), and the other is a random orthogonal direction (y-axis). Each point in the span plane shows the predicted label of the image generated by adding the corresponding noise to the original image (e.g., the origin corresponds to the predicted label of the original image). The units of both axes are single pixel values. Each sub-figure plots the regions on the span plane using the same color for the same label. The image is the one in Figure 2.
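The span plane can be sampled as in the following sketch; `predict`, which maps an image vector to a label, is a stand-in for any of the five classifiers.

import numpy as np

def decision_plane(predict, x, d1, d2, radius=20, step=1):
    """Predicted label at each point x + u*d1 + v*d2 of the plane
    spanned by the normalized orthogonal directions d1, d2."""
    us = np.arange(-radius, radius + step, step)
    labels = np.empty((len(us), len(us)), dtype=object)
    for i, u in enumerate(us):
        for j, v in enumerate(us):
            labels[i, j] = predict(np.clip(x + u * d1 + v * d2, 0, 255))
    return labels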
We can observe that for all models, the region within which each model predicts the image correctly is limited to the central area. Also, along the gradient direction, the classifiers are soon misled. One interesting finding is that along this gradient direction, the first misclassified label for the three ResNet models (corresponding to the light green region) is the label "orange". A more detailed study can be found in our online technical report: Liu et al. (2016). When we look at the zoomed-out figures, however, the labels of images that are far away from the original one are different for different models, even among the ResNet models.

On the other hand, in Table 5, we show the total number of regions in each plane. In fact, for each plane, there are at most 21 different regions (a one-line sketch of this count is given after Figure 4). Compared with the 1,000 total categories in ImageNet, this is only 2.1% of all categories. That means that for the remaining 97.9% of labels, no targeted adversarial example exists in the plane. This phenomenon partially explains why fast gradient-based approaches can hardly find targeted adversarial images.

Table 5: The number of all possible predicted labels for each model in the plane described in Figure 3.

Further, in Figure 4, we draw the decision boundaries of all models on the same plane as described above. We can observe that:

• The boundaries align with each other very well. This partially explains why non-targeted adversarial images can transfer among models.
• The boundary diameter along the gradient direction is smaller than the one along the random direction. A potential reason is that moving a variable along its gradient direction changes the loss function (i.e., the probability of the ground truth label) significantly. Therefore, along the gradient direction it takes fewer steps to move out of the ground-truth region than along a random direction.
• An interesting finding is that even when we move left along the x-axis, which is equivalent to maximizing the ground truth's prediction probability, we also reach the boundary much sooner than when moving along a random direction. We attribute this to the non-linearity of the loss function: when the distortion is larger, the gradient direction also changes dramatically. In this case, moving along the original gradient direction no longer increases the probability of predicting the ground truth label (details can be found in our online technical report: Liu et al. (2016)).
• As for the VGG-16 model, there is a small hole within the region corresponding to the ground truth. This may partially explain why non-targeted adversarial images with small distortion exist but do not transfer well. This hole does not exist in the other models' decision planes; in this case, non-targeted adversarial images in this hole do not transfer.

[Figure 4 plot omitted: overlaid ground-truth decision boundaries of VGG-16, ResNet-101, ResNet-152, ResNet-50 and GoogLeNet on the span plane.]

Figure 4: The decision boundary separating the region within which all points are classified as the ground truth label (encircled by each closed curve) from the others. The plane is the same one described in Figure 3. The origin of the coordinate plane corresponds to the original image. The units of both axes are single pixel values.
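Given a sampled plane such as the one returned by `decision_plane` above, the count reported in Table 5 is simply the number of distinct labels it contains:

def num_plane_labels(labels):
    """Number of distinct predicted labels in a decision plane
    (at most 21 out of 1,000 in the planes we examine)."""
    return len({l for row in labels for l in row})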
Decision boundaries of the targeted ensemble-based approaches. In addition, we choose the targeted adversarial direction of the ensemble of all models except ResNet-101 and a random orthogonal direction, and we plot decision boundaries on the plane spanned by these two direction vectors in Figure 5. We observe that the regions of images which are predicted as the target label align well for the four models in the ensemble. However, the model not used to generate the adversarial image, i.e., ResNet-101, also has a non-empty region within which the prediction is successfully misled to the target label, although the area is much smaller. Meanwhile, the regions within the closed curves of the different models almost share the same center.

[Figure 5 plot omitted: overlaid target-label decision boundaries of ResNet-101, VGG-16, ResNet-50, ResNet-152 and GoogLeNet.]

Figure 5: The decision boundary separating the region within which all points are classified as the target label (encircled by each closed curve) from the others. The plane is spanned by the targeted adversarial direction and a random orthogonal direction. The targeted adversarial direction is computed as the difference between the original image in Figure 2 and the adversarial image generated by the optimization-based approach for an ensemble. The ensemble contains all models except ResNet-101. The origin of the coordinate plane corresponds to the original image. The units of both axes are single pixel values."}, {"section_index": "6", "section_name": "REAL WORLD EXAMPLE: ADVERSARIAL EXAMPLES FOR CLARIFAI.COM", "section_text": "Clarifai.com is a commercial company providing state-of-the-art image classification services. We have no knowledge about the dataset and types of models used behind Clarifai.com, except that we have black-box access to its services. The labels returned from Clarifai.com are also different from the categories in ILSVRC 2012. We submit all 100 original images to Clarifai.com, and the returned labels are correct based on a subjective measure.

We also submit 400 adversarial images in total, of which 200 are targeted adversarial examples and the remaining 200 are non-targeted ones. As for the 200 targeted adversarial images, 100 of them are generated using the optimization-based approach based on VGG-16 (the same ones evaluated in Table 2), and the other 100 are generated using the optimization-based approach based on an ensemble of all models except ResNet-152 (the same ones evaluated in Table 3). The 200 non-targeted adversarial examples are generated similarly (the same ones evaluated in Tables 1 and 4).

For non-targeted adversarial examples, we observe that for both the ones generated using VGG-16 and those generated using the ensemble, most of them can transfer to Clarifai.com.

More importantly, a large proportion of our targeted adversarial examples are misclassified by Clarifai.com as well. We observe that 57% of the targeted adversarial examples generated using VGG-16 and 76% of the ones generated using the ensemble can mislead Clarifai.com into predicting labels irrelevant to the ground truth.

Further, our experiment shows that for targeted adversarial examples, 18% of those generated using the ensemble model can be predicted as labels close to the target label by Clarifai.com. The corresponding number for the targeted adversarial examples generated using VGG-16 is 2%. Considering that in the case of attacking Clarifai.com, the labels given by the target model are different from those given by our models, it is fairly surprising to see that when using the ensemble-based approach, there is still a considerable proportion of our targeted adversarial examples that can mislead this black-box model into making predictions semantically similar to our target labels. All these numbers are computed based on a subjective measure, and we include some examples in Table 6. More examples can be found in our online technical report: Liu et al. (2016).

[Table 6 images omitted; its rows pair each original image's true label and target label with the top-5 Clarifai.com tags for the original and the targeted adversarial image:]

true label: viaduct | target label: window screen | original image: bridge, sight, arch, river, sky | adversarial example: window, wall, old, decoration, design
true label: hip, rosehip | target label: stupa, tope | original image: fruit, fall, food, little, wildlife | adversarial example: Buddha, gold, temple, celebration, artistic
true label: dogsled, dog sled, dog sleigh | target label: hip, rose hip, rosehip | original image: group together, four, sledge, sled, enjoyment | adversarial example: cherry, branch, fruit, food, season
true label: pug, pug-dog | target label: sea lion | original image: pug, friendship, adorable, purebred, cute | adversarial example: sea, seal, ocean, head, sit
true label: Old English sheepdog, bobtail | target label: abaya | original image: poodle, retriever, loyalty, dog, two | adversarial example: veil, spirituality, religion, people, illustration
true label: maillot, tank suit | target label: amphibian, amphibious vehicle | original image: beach, woman, adult, man, wear | adversarial example: transportation system, vehicle, print, portrait, retro
true label: patas, hussar monkey, Erythrocebus patas | target label: bee eater | original image: primate, monkey, safari, sit, looking | adversarial example: ornithology, avian, beak, wing, feather

Table 6: Original images and adversarial images evaluated over Clarifai.com. For labels returned from Clarifai.com, we sort the labels first by rareness (how many times a label appears in the Clarifai.com results for all adversarial images and original images) and second by confidence. Only the top 5 labels are provided."}, {"section_index": "7", "section_name": "8 CONCLUSION", "section_text": "In this work, we are the first to conduct an extensive study of the transferability of both non-targeted and targeted adversarial examples generated using different approaches over large models and a large-scale dataset. Our results confirm that the transferability of non-targeted adversarial examples is prominent even for large models and a large-scale dataset. On the other hand, we find that it is hard to use existing approaches to generate targeted adversarial examples whose target labels can transfer. We develop novel ensemble-based approaches, and demonstrate that they can generate transferable targeted adversarial examples with a high success rate. Meanwhile, these new approaches exhibit better performance on generating non-targeted transferable adversarial examples than previous work. We also show that both non-targeted and targeted adversarial examples generated using our new approaches can successfully attack Clarifai.com, which is a black-box image classification system. Furthermore, we study some geometric properties to better understand transferable adversarial examples."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, and Pascal Frossard. Robustness of classifiers: from adversarial to random noise. In Advances in Neural Information Processing Systems, pp. 1624-1632, 2016.

Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.

Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.

Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. arXiv preprint arXiv:1610.08401, 2016.

Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016a.

Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697, 2016b.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. URL http://arxiv.org/abs/1409.1556."}]
Hk8N3Sclg | [{"section_index": "0", "section_name": "MULTI-AGENT COOPERATION AND THE EMERGENCE OF (NATURAL) LANGUAGE", "section_text": "1Google DeepMind, 2Facebook AI Research, 3University of Trento
angeliki@google.com, {alexpeys,mbaroni}@fb.com
* Work done while at Facebook AI Research"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are interested in developing interactive machines, such as conversational agents. We propose a framework for language learning that relies on multi-agent communication. We study this learning in the context of referential games. In these games, a sender and a receiver see a pair of images. The sender is told one of them is the target and is allowed to send a message from a fixed, arbitrary vocabulary to the receiver. The receiver must rely on this message to identify the target. Thus, the agents develop their own language interactively out of the need to communicate. We show that two networks with simple configurations are able to learn to coordinate in the referential game. We further explore how to make changes to the game environment to cause the "word meanings" induced in the game to better reflect intuitive semantic properties of the images. In addition, we present a simple strategy for grounding the agents' code into natural language. Both of these are necessary steps towards developing machines that are able to communicate with humans productively."}, {"section_index": "2", "section_name": "INTRODUCTION", "section_text": "I tried to break it to him gently [...] the only way to learn an unknown language is to interact with a native speaker [...] asking questions, holding a conversation, that sort of thing [...] If you want to learn the aliens' language, someone [...] will have to talk with an alien. Recordings alone aren't sufficient. — Ted Chiang, Story of Your Life

One of the main aims of AI is to develop agents that can cooperate with others to achieve goals (Wooldridge 2009). Such coordination requires communication. If the coordination partners are to include humans, the most obvious channel of communication is natural language. Thus, handling natural-language-based communication is a key step toward the development of AI that can thrive in a world populated by other agents.

Given the success of deep learning models in related domains such as image captioning or machine translation (e.g., Sutskever et al. 2014; Xu et al. 2015), it would seem reasonable to cast the problem of training conversational agents as an instance of supervised learning (Vinyals & Le 2015). However, training on "canned" conversations does not allow learners to experience the interactive aspects of communication. Supervised approaches, which focus on the structure of language, are an excellent way to learn general statistical associations between sequences of symbols. However, they do not capture the functional aspects of communication, i.e., that humans use words to coordinate with others and make things happen (Austin 1962; Clark 1996; Wittgenstein 1953).

This paper introduces the first steps of a research program based on multi-agent coordination communication games. These games place agents in simple environments where they need to develop a language to coordinate and earn payoffs. Importantly, the agents start as blank slates, but, by playing a game together, they can develop and bootstrap knowledge on top of each other, leading to the emergence of a language.

The central problem of our program, then, is the following: How do we design environments that foster the development of a language that is portable to new situations and to new communication partners (in particular humans)?

Other researchers have proposed communication-based environments for the development of coordination-capable AI. Work in multi-agent systems has focused on the design of pre-programmed communication systems to solve specific tasks (e.g., robot soccer, Stone & Veloso 1998). Most related to our work, Sukhbaatar et al. (2016) and Foerster et al. (2016) show that neural networks can evolve communication in the context of games without a pre-coded protocol. We pursue the same question, but further ask how we can change our environment to make the emergent language more interpretable.

Others (e.g., the SHRDLU program of Winograd 1971, or the game in Wang et al. 2016) propose building a communicating AI by putting humans in the loop from the very beginning. This approach has benefits but faces serious scalability issues, as active human intervention is required at each step. An attractive component of our game-based paradigm is that humans may be added as players, but do not need to be there all the time.

A third branch of research focuses on "Wizard-of-Oz" environments, where agents learn to play games by interacting with a complex scripted environment (Mikolov et al. 2015). This approach gives the designer tight control over the learning curriculum, but imposes a heavy engineering burden on developers. We also stress the importance of the environment (game setup), but we focus on simpler environments with multiple agents that force them to get smarter by bootstrapping on top of each other.

We leverage ideas from work in linguistics, cognitive science and game theory on the emergence of language (Wagner et al. 2003; Skyrms 2010; Crawford & Sobel 1982; Crawford 1998). Our game is a variation of Lewis' signaling game (Lewis 1969). There is a rich tradition of linguistic and cognitive studies using similar setups (e.g., Briscoe 2002; Cangelosi & Parisi 2002; Spike et al. 2016; Steels & Loetzsch 2012). What distinguishes us from this literature is our aim to, eventually, develop practical AI. This motivates our focus on more realistic input data (a large collection of noisy natural images) and on trying to align the agents' language with human intuitions.

Lewis' classic games have been studied extensively in game theory under the name of "cheap talk". These games have been used as models to study the evolution of language both theoretically and experimentally (Crawford 1998; Blume et al. 1998; Crawford & Sobel 1982). A major question in game theory is whether equilibrium actually occurs in a game, as convergence in learning is not guaranteed (Fudenberg & Peysakhovich 2014; Roth & Erev 1995). And, if an equilibrium is reached, which one it will be (since they are typically not unique). This is particularly true for cheap talk games, which exhibit Nash equilibria in which precise language emerges, others where vague language emerges, and others where no language emerges at all (Crawford & Sobel 1982). In addition, because in these games language has no ex-ante meaning and only emerges in the context of the equilibrium, some of the emergent languages may not be very natural. Our results speak to both the convergence question and the question of what features of the game cause the appearance of different types of languages. Thus, our results are also of interest to game theorists.

An evolutionary perspective has recently been advocated as a way to mitigate the data hunger of traditional supervised approaches (Goodfellow et al. 2014; Silver et al. 2016). This research confirms that learning can be bootstrapped from competition between agents. We focus, however, on cooperation between agents as a way to foster learning while reducing the need for annotated data.

We start from the most basic challenge of using a language in order to refer to things in the context of a two-agent game. We focus on two questions. First, whether tabula rasa agents succeed in communication. Second, what features of the environment lead to the development of codes resembling human language.

We assess this latter question in two ways. First, we consider whether the agents associate general conceptual properties, such as broad object categories (as opposed to low-level visual properties), to the symbols they learn to use. Second, we examine whether the agents' "word usage" is partially interpretable by humans in an online experiment.

Our general framework includes K players, each parametrized by θk, a collection of tasks/games that the players have to perform, a communication protocol V that enables the players to communicate with each other, and payoffs assigned to the players as a deterministic function of a well-defined goal. In this paper we focus on a particular version of this: referential games. These games are structured as follows.

1. There is a set of images represented by vectors {i1, ..., iN}; two images are drawn at random from this set, call them (iL, iR), and one of them is chosen to be the "target" t ∈ {L, R}.
2. There are two players, a sender and a receiver, each seeing the images — the sender receives input θS(iL, iR, t).
3. There is a vocabulary V of size K, and the sender chooses one symbol to send to the receiver; we call this the sender's policy s(θS(iL, iR, t)) ∈ V.
4. The receiver does not know the target, but sees the sender's symbol and tries to guess the target image. We call this the receiver's policy r(iL, iR, s(θS(iL, iR, t))) ∈ {L, R}.
5. If r(iL, iR, s(θS(iL, iR, t))) = t, that is, if the receiver guesses the target, both players receive a payoff of 1 (win); otherwise they receive a payoff of 0 (lose).

Many extensions to the basic referential game explored here are possible. There can be more images, or a more sophisticated communication protocol (e.g., communication of a sequence of symbols, or multi-step communication requiring back-and-forth interaction¹), rotation of the sender and receiver roles, having a human occasionally playing one of the roles, etc.

¹For example, Jorge et al. (2016) explore agents playing a "Guess Who" game to learn about the emergence of question-asking and answering in language.
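A single round of this game can be sketched as follows, with `sender` and `receiver` as hypothetical stand-ins for the (parametrized, randomized) policies described in the list above.

import random

def play_round(sender, receiver, images, vocabulary):
    """One round of the referential game: sample a pair, let the sender
    emit a symbol, and reward both players iff the receiver guesses."""
    i_L, i_R = random.sample(images, 2)
    t = random.choice(["L", "R"])         # which image is the target
    symbol = sender(i_L, i_R, t)          # s(theta_S(i_L, i_R, t)), in V
    assert symbol in vocabulary
    guess = receiver(i_L, i_R, symbol)    # r(i_L, i_R, symbol), in {L, R}
    return 1 if guess == t else 0         # shared payoff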
Images. We use McRae et al.'s (2005) set of 463 base-level concrete concepts (e.g., cat, apple, car...) spanning 20 general categories (e.g., animal, fruit/vegetable, vehicle...). We randomly sample 100 images of each concept from ImageNet (Deng et al. 2009). To create target/distractor pairs, we randomly sample two concepts, one image for each concept, and whether the first or second image will serve as target. We apply to each image a forward pass through the pre-trained VGG ConvNet (Simonyan & Zisserman 2014), and represent it with the activations from either the top 1000-D softmax layer (sm) or the second-to-last 4096-D fully connected layer (fc).

Agent Players. Both sender and receiver are simple feed-forward networks. For the sender, we experiment with the two architectures depicted in Figure 1. Both sender architectures take as input the target (marked with a green square in Figure 1) and distractor representations, always in this order, so that they are implicitly informed of which image is the target (the receiver, instead, sees the two images in random order).

The agnostic sender is a generic neural network that maps the original image vectors onto a "game-specific" embedding space (in the sense that the embedding is learned while playing the game), followed by a sigmoid nonlinearity. Fully connected weights are applied to the embedding concatenation to produce scores over vocabulary symbols.

The informed sender also first embeds the images into a "game-specific" space. It then applies 1-D convolutions ("filters") on the image embeddings by treating them as different channels. The informed sender uses convolutions with kernel size 2×1 applied dimension-by-dimension to the two image embeddings (in Figure 1, there are 4 such filters). This is followed by the sigmoid nonlinearity. The resulting feature maps are combined through another filter (kernel size f×1, where f is the number of filters on the image embeddings), to produce scores for the vocabulary symbols. Intuitively, the informed sender has an inductive bias towards combining the two images dimension-by-dimension, whereas the agnostic sender does not (though we note the agnostic architecture nests the informed one).

[Figure 1 diagrams omitted: the agnostic sender and informed sender architectures.]

Figure 1: Architectures of agent players.

For both senders, motivated by the discrete nature of language, we enforce a strong communication bottleneck that discretizes the communication protocol. Activations on the top (vocabulary) layer are converted to a Gibbs distribution (with temperature parameter τ), and then a single symbol s is sampled from the resulting probability distribution.

The receiver takes as input the target and distractor image vectors in random order, as well as the symbol produced by the sender (as a one-hot vector over the vocabulary). It embeds the images and the symbol into its own "game-specific" space. It then computes dot products between the symbol and image embeddings. Ideally, dot similarity should be higher for the image that is better denoted by the symbol. The two dot products are converted to a Gibbs distribution (with temperature τ), and the receiver "points" to an image by sampling from the resulting distribution.

General Training Details. We set the following hyperparameters without tuning: embedding dimensionality: 50; number of filters applied to embeddings by the informed sender: 20; temperature of Gibbs distributions: 10. We explore two vocabulary sizes: 10 and 100 symbols.

The sender and receiver parameters θ = (θR, θS) are learned while playing the game. No weights are shared, and the only supervision used is communication success, i.e., whether the receiver pointed at the right referent.

This setup is naturally modeled with Reinforcement Learning (Sutton & Barto 1998). As outlined in Section 2, the sender follows policy s(θS(iL, iR, t)) ∈ V and the receiver policy r(iL, iR, s(θS(iL, iR, t))) ∈ {L, R}. The loss function that the two agents must minimize is −E[R(r)], where R is the reward function returning 1 iff r(iL, iR, s(θS(iL, iR, t))) = t. Parameters are updated through the Reinforce rule (Williams 1992). We apply mini-batch updates, with a batch size of 32, for a total of 50k iterations (games). At test time, we compile a set of 10k games using the same method as for the training games.
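A minimal sketch of how one such game is turned into a Reinforce update follows. The exact Gibbs parametrization (here, scores divided by τ) and the two stand-in scorers are assumptions, not the paper's implementation.

import torch
import torch.nn.functional as F

def reinforce_loss(sender_scores, receiver_scorer, target, tau=10.0):
    """Sample the sender's symbol and the receiver's guess from Gibbs
    distributions, then return the REINFORCE surrogate loss for -E[R]."""
    sym_probs = F.softmax(sender_scores / tau, dim=-1)   # Gibbs over symbols
    symbol = torch.multinomial(sym_probs, 1)             # sampled symbol
    img_probs = F.softmax(receiver_scorer(symbol) / tau, dim=-1)
    guess = torch.multinomial(img_probs, 1)              # receiver "points"
    reward = (guess == target).float()                   # 1 iff target guessed
    log_p = torch.log(sym_probs.gather(-1, symbol)) \
          + torch.log(img_probs.gather(-1, guess))
    return -(reward.detach() * log_p).mean()             # backpropagate this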
"}, {"section_index": "3", "section_name": "LEARNING TO COMMUNICATE", "section_text": "Our first question is whether agents converge to successful communication at all. We see that they do: agents almost perfectly coordinate in the 1k rounds following the 10k training games for every architecture and parameter choice (Table 1).

We see, though, some differences between different sender architectures. Figure 2 (left) shows performance on a sample of the test set as a function of the first 5,000 rounds of training.

[Figure 2 plots omitted. Left: communication success vs. #games, with curves for agnostic-sender (100 symbols), agnostic-sender (10 symbols), informed-sender (100 symbols) and informed-sender (10 symbols). Right: fraction of variance explained vs. singular value position.]

Figure 2: Left: Communication success as a function of training iterations; we see that informed senders converge faster than agnostic ones. Right: Spectrum of an example symbol usage matrix: the first few dimensions capture only partial variance, suggesting that the usage of more symbols by the informed sender is not just due to synonymy.

The informed sender makes use of more symbols from the available vocabulary, while the agnostic sender constantly uses a compact 2-symbol vocabulary. This suggests that the informed sender is using more varied and word-like symbols (recall that the images depict 463 distinct objects, so we would expect a natural-language-endowed sender to use a wider array of symbols to discriminate among them). However, it could also be the case that the informed sender's vocabulary simply contains higher redundancy/synonymy. To check this, we construct a (sampled) matrix where rows are game image pairs, columns are symbols, and entries represent how often that symbol is used for that pair. We then decompose the matrix through SVD. If the sender is indeed just using a strategy with few effective symbols but high synonymy, then we should expect a 1- or 2-dimensional decomposition. Figure 2 (right) plots the normalized spectrum of this matrix. While there is some redundancy in the matrix (thus potentially implying there is synonymy in the usage), the language still requires multiple dimensions to summarize (cross-validated SVD suggests 50 dimensions).

We now turn to investigating the semantic properties of the emergent communication protocol. Recall that the vocabulary that agents use is arbitrary and has no initial meaning. One way to understand its emerging semantics is by looking at the relationship between symbols and the sets of images they refer to.

id  sender    vis rep  voc size  used symbols  comm success (%)  purity (%)  obs-chance purity (%)
1   informed  sm       100       58            100               46          27
2   informed  fc       100       38            100               41          23
3   informed  sm       10        10            100               35          18
4   informed  fc       10        10            100               32          17
5   agnostic  sm       100       2             99                21          15
6   agnostic  fc       10        2             99                21          15
7   agnostic  sm       10        2             99                20          15
8   agnostic  fc       100       2             99                19          15

Table 1: Playing the referential game: test results after 50K training games. The used symbols column reports the number of distinct vocabulary symbols that were produced at least once in the test phase. See text for explanation of comm success and purity. All purity values are highly significant (p < 0.001) compared to simulated chance symbol assignment when matching observed symbol usage. The obs-chance purity column reports the difference between observed and expected purity under chance.

The objects in our images were categorized into 20 broader categories (such as weapon and mammal) by McRae et al. (2005). If the agents converged to higher-level semantic meanings for the symbols, we would expect that objects belonging to the same category would activate the same symbols, e.g., that, say, when the target images depict bayonets and guns, the sender would use the same symbol to refer to them, whereas cows and guns should not share a symbol.

To quantify this, we form clusters by grouping objects by the symbols that are most often activated when target images contain them. We then assess the quality of the resulting clusters by measuring their purity with respect to the McRae categories. Purity (Zhao & Karypis 2003) is a standard measure of cluster "quality": the purity of a clustering solution is the proportion of category labels in the clusters that agree with the respective cluster majority category. This number reaches 100% for perfect clustering, and we always compare the observed purity to the score that would be obtained from a random permutation of symbol assignments to objects. Table 1 shows that purity, while far from perfect, is significantly above chance in all cases. We confirm moreover that the informed sender is producing symbols that are more semantically natural than those of the agnostic one.

Still, surprisingly, purity is significantly above chance even when the latter is only using two symbols. From our qualitative evaluations, in this case the agents converge to a (noisy) characterization of objects as "living-vs-non-living" which, intriguingly, has been recognized as the most basic one in the human semantic system (Caramazza & Shelton 1998).
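Purity can be computed in a few lines; `symbol_of_object` and `category_of_object` are hypothetical dictionaries mapping each object to its majority symbol and its McRae category.

from collections import Counter, defaultdict

def cluster_purity(symbol_of_object, category_of_object):
    """Group objects by their majority symbol, then count how many
    objects agree with their cluster's majority category."""
    clusters = defaultdict(list)
    for obj, sym in symbol_of_object.items():
        clusters[sym].append(category_of_object[obj])
    agree = sum(Counter(cats).most_common(1)[0][1]
                for cats in clusters.values())
    return agree / sum(len(cats) for cats in clusters.values())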
Rather than using hard clusters, we can also ask whether symbol usage reflects the semantics of the visual space. To do so, we construct vector representations for each object (defined by its ImageNet label) by averaging the CNN fc representations of all category images in our dataset (see Section 3 above). Note that the fc layer, being near the top of a deep CNN, is expected to capture high-level visual properties of objects (Zeiler & Fergus 2014). Moreover, since we average across many specific images, our vectors should capture rather general, high-level properties of objects.

We map these average object vectors to 2 dimensions via t-SNE (Van der Maaten & Hinton 2008) and we color-code them by the majority symbol the sender used for images containing the corresponding object. Figure 3 (left) shows the results for the current experiment. We see that objects that are close in CNN space (thus, presumably, visually similar) are associated with the same symbol (same color). However, there still appears to be quite a bit of variation.

[Figure 3 scatter plots omitted: two t-SNE maps of object fc vectors, with points labeled by object names (bike, ambulance, bazooka, bayonet, airplane, barn, axe, alligator, avocado, ...) and colored by majority symbol.]

Figure 3: t-SNE plots of object fc vectors color-coded by majority symbols assigned to them by the informed sender. Object class names are shown for a random subset. Left: configuration of the 4th row of Table 1. Right: 2nd row of Table 2."}, {"section_index": "4", "section_name": "4.1 OBJECT-LEVEL REFERENCE", "section_text": "We established that our agents can solve the coordination problem, and we have at least tentative evidence that they do so by developing symbol meanings that align with our semantic intuition. We turn now to a simple way to tweak the game setup in order to encourage the agents to further pursue high-level semantics.

The strategy is to remove some aspects of "common knowledge" from the game. Common knowledge, in game-theoretic parlance, consists of facts that everyone knows, everyone knows that everyone knows, and so on (Brandenburger et al. 2014). Coordination can only occur if the basis of the coordination is common knowledge (Rubinstein 1989); therefore, if we remove some facts from common knowledge, we will preclude our agents from coordinating on them. In our case, we want to remove facts pertaining to the details of the input images, thus forcing the agents to coordinate on more abstract properties. We can remove all low-level common knowledge by letting the agents play only using class-level properties of the objects. We achieve this by modifying the game to show the agents different pairs of images while maintaining the ImageNet class of both the target and distractor (e.g., if the target is dog, the sender is shown a picture of a Chihuahua and the receiver that of a Boston Terrier).

Table 2 reports results for various configurations. We see that the agents are still able to coordinate. Moreover, we observe a small increase in symbol usage purity, as expected, since agents can now only coordinate on general properties of object classes, rather than on the specific properties of each image. This effect is clearer in Figure 3 (right), where we repeat the t-SNE-based visualization of the relationship that emerges between visual embeddings and the words used to refer to them in this new experiment.

Table 2: Playing the referential game with object-level targets: test results after 50K training plays. Columns as in Table 1. All purity values significant at p < 0.001."}, {"section_index": "5", "section_name": "5 GROUNDING AGENTS' COMMUNICATION IN HUMAN LANGUAGE", "section_text": "The results in Section 4 show communication robustly arising in our game, and that we can change the environment to nudge agents to develop symbol meanings which are more closely related to the visual or class-based semantics of the images. Still, we would like agents to converge on a language fully understandable by humans, as our ultimate goal is to develop conversational machines. To do this, we will need to ground the communication.

Taking inspiration from AlphaGo (Silver et al. 2016), an AI that reached the Go master level by combining interactive learning in games of self-play with passive supervised learning from a large set of human games, we combine the usual referential game, in which agents interactively develop their communication protocol, with a supervised image labeling task, where the sender must learn to assign objects their conventional names. This way, the sender will naturally be encouraged to use such names with their conventional meaning to discriminate target images when playing the game, making communication more transparent to humans.

In this experiment, the sender switches, equiprobably, between game playing and a supervised image classification task using ImageNet classes. Note that the supervised objective does not aim at improving agents' coordination performance. Instead, supervision provides them with basic grounding in natural language (in the form of image-label associations), while concurrent interactive game playing should teach them how to effectively use this grounding to communicate.

We use the informed sender, fc image representations and a vocabulary size of 100. Supervised training is based on 100 labels that are a subset of the object names in our dataset (see Section 3 above). When predicting object names, the sender uses the usual game-embedding layer coupled with a softmax layer of dimensionality 100 corresponding to the object names. Importantly, the game-embedding layers used in object classification and the reference game are shared. Consequently, we hope that, when playing, the sender will produce symbols aligned with object names acquired in the supervised phase.
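The switching scheme itself is simple; in the sketch below, both step functions are stand-ins that update the shared sender embedding layer.

import random

def training_step(play_game_step, supervised_step):
    """Equiprobable switching between interactive game playing and
    supervised object naming, as described above."""
    if random.random() < 0.5:
        return play_game_step()   # referential game, Reinforce update
    return supervised_step()      # ImageNet label prediction, cross-entropy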
Although the language developec in referential games will be initially very limited, if both agents and humans possess the sort of flexibility displayed in this last experiment, the noisy but shared common ground might suffice to establish basic communication."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Adam Brandenburger, Eddie Dekel, et al. Hierarchies of beliefs and common knowledge. The Language of Game Theory: Putting Epistemics into the Mathematics of Games. 5:31. 2014\nAlfonso Caramazza and Jennifer Shelton. Domain-specific knowledge systems in the brain th animate-inanimate distinction. Journal of Cognitive Neuroscience. 10(1):1-34. 1998\nHerbert H Clark. Using language. 1996. Cambridge University Press: Cambridge), 952:274-296 1996.\nVincent Crawford. A survey of experiments on communication via cheap talk. Journal of Economi theory, 78(2):286-298, 1998\nOur results confirmed that fairly simple neural-network agents can learn to coordinate in a referential game in which they need to communicate about a large number of real pictures. They also suggest that the meanings agents come to assign to symbols in this setup capture general conceptual prop- erties of the objects depicted in the image, rather than low-level visual properties. We also showed. a path to grounding the communication in natural language by mixing the game with a supervised task.\nIn future work, encouraged by our preliminary experiments with object naming, we want to study. how to ensure that the emergent communication stays close to human natural language. Predictive learning should be retained as an important building block of intelligent agents, focusing on teaching them structural properties of language (e.g., lexical choice, syntax or style). However, it is also. important to learn the function-driven facets of language, such as how to hold a conversation, and interactive games are a potentially fruitful method to achieve this goal..\nAndreas Blume, Douglas V DeJong, Yong-Gwan Kim, and Geoffrey B Sprinkle. Experimental evidence on the evolution of meaning of messages in sender-receiver games. The American Eco nomic Review, 88(5):1323-1340, 1998.\nTed Briscoe (ed.). Linguistic evolution through language acquisition. Cambridge University Press Cambridge, UK, 2002\nJakob N. Foerster, Yannis M. Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate to solve riddles with deep distributed recurrent q-networks. Technical Repor arXiv:1602.02672,2016. URLhttp://arxiv.0rg/pdf/1602.02672v1\nEmilio Jorge, Mikael Kageback, and Emil Gustavsson. Learning to play guess who? and inventing a grounded language as a consequence. https://arxiv.org/abs/1611.03218, 2016.\nDavid Lewis. Convention. Harvard University Press. Cambridge. MA. 1969\nJames Pustejovsky. The Generative Lexicon. MIT Press, Cambridge, MA, 1995\nAlvin E Roth and Ido Erev. Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term. Games and economic behavior, 8(1):164-212, 1995\nAriel Rubinstein. The electronic mail game: Strategic behavior under 'almost common knowledge The American Economic Review, pp. 385-391, 1989\nDavid Silver, Aja Huang, Christopher Maddison, Arthur Guez, Laurent Sifre, George van der. Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lilli. crap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the. 
ByBwSPcex | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Neural networks have revolutionized many fields. They have not only proven to be powerful in performing perception tasks such as image classification and language understanding, but have also shown to be surprisingly good \"artists\". In Gatys et al. (2015), photos were turned into paintings by exploiting particular drawing styles such as Van Gogh's, Kiros et al. (2015) produced stories about images biased by writing style (e.g., romance books), Karpathy et al. (2016) wrote Shakespeare-inspired novels, and Simo-Serra et al. (2015) gave fashion advice.

Music composition is another artistic domain where neural based approaches have been proposed. Early approaches exploiting Recurrent Neural Networks (Bharucha & Todd (1989); Mozer (1996); Chen & Miikkulainen (2001); Eck & Schmidhuber (2002)) date back to the 80's. The main variations between the different models are the representation of the notes and the outputs they produced, which typically encode melody and chord. Most of these approaches were single track, in that they produced only one note per time step. The exception is Boulanger-lewandowski et al. (2012), which generated polyphonic music, i.e., simultaneous independent melodies.

In this paper, we aim to generate pop music, where the melody but also chords and other instruments make up what is typically called a song. We draw inspiration from the Song from π by Macdonald, a piano video on Youtube, where pleasing music is created from a sequence of digits of π. This video shows both the randomness and the regularity of music. On one hand, since any possible digit sequence is a subset of the digit sequence of π, this implies that pleasing music can be created even from a totally random base signal. On the other hand, the composer uses specific rules such as the A Harmonic Minor scale and harmonies to convert the digit sequence into a music sheet. It is these rules that play the key role in converting randomness into music.

Following the ideas of Song from π, we aim to generate both the melody as well as accompanying effects such as chords and drums. Arguably, these turn even a not particularly pleasing melody into a well sounding song. We propose a hierarchical approach, where each level is a Recurrent Neural Network producing a key aspect of the song. The bottom layers generate the melody, while the higher levels produce drums and chords. This enables the drum and chord layers to compensate for the melody in order to produce pleasing music. Adopting the key idea from Song from π, we condition our model on the scale type, allowing the melody generator to learn the notes that are typically played in a particular scale."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We present a novel framework for generating pop music. Our model is a hierarchical Recurrent Neural Network, where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. In particular, the bottom layers generate the melody, while the higher levels produce the drums and chords. We conduct several human studies that show strong preference of our generated music over that produced by the recent method by Google. We additionally show two applications of our framework: neural dancing and karaoke, as well as neural story singing.

We train our model on 100 hours of midi music containing user-composed pop songs and video game music.
We conduct human studies with music generated with our approach and compare it against a recent approach by Google, showing that our songs are strongly preferred over the baseline. In our human study we also perform an ablation analysis of our model. We additionally show two new applications: neural dancing and karaoke, as well as neural story singing. As part of the first application we generate a stickman dancing to our music and lyrics that can be sung with, while in the second application we condition on the output of Kiros et al. (2015), which writes a story about an image, and convert it into a pop song. We refer the reader to http://www.cs.toronto.edu/songfrompi/ for our demos and results.

Generating music has been an active research area for decades. It brings together machine learning researchers that aim to capture the complex structure of music (Eck & Schmidhuber (2002); Boulanger-lewandowski et al. (2012)), as well as music professionals (Chan et al. (2006)) and enthusiasts (Johnson; Sun) that want to see how far a computer can get to be a real composer. Real-time music generation is also explored for gaming (Engels et al. (2015)).

Early approaches mostly instilled knowledge from music theory into generation, by using rules on how music segments can be stitched together in a plausible way, e.g., Chan et al. (2006). On the other hand, neural networks have been used for music generation since the 80's (Bharucha & Todd (1989); Mozer (1996); Chen & Miikkulainen (2001); Eck & Schmidhuber (2002)). Mozer (1996) used a Recurrent Neural Network that produced pitch, duration and chord at each time step. Unlike most other neural network approaches, this work encodes music knowledge into the representation. Eck & Schmidhuber (2002) was first to use LSTMs to generate both melody and chord. Compared to Mozer (1996), the LSTM captured more global music structure across the song.

Like us, Kang et al. (2012) built upon the randomness of melody by trying to accompany it with drums. However, in their model the scale type is enforced. No details about the model are given, and thus it is virtually impossible to compare to. Boulanger-lewandowski et al. (2012) propose to learn complex polyphonic musical structure which has multiple notes playing in parallel through the song. The model is single-track in that it only produces melody, whereas in our work we aim to produce multi-track songs. Just recently, Huang & Wu (2016) proposed a 2-layer LSTM that, like Boulanger-lewandowski et al. (2012), produces music that is more complex than a single note sequence, and is able to produce chords. The main novelty of our work over existing approaches is a hierarchical model that incorporates knowledge from music theory to build the neural architecture, and produces multi-track pop music (melody, chord, drum). We also present two novel fun applications.

We start by introducing the basic notation and definitions from music theory. A note defines the basic unit that music is composed of. Music follows the 12-tone system, i.e., 12 is the cycle length of all notes. The 12 tones are: C, C#/Db, D, D#/Eb, E, F, F#/Gb, G, G#/Ab, A, A#/Bb, B. A bar is a short segment of time that corresponds to a specific number of beats (notes). The boundaries of the bar are indicated by vertical bar lines.

Scale is a subset of notes. There are four types of scales most commonly used: Major (Minor), Harmonic Minor, Melodic Minor and Blues. Each scale type specifies a sequence of relative intervals (or shifts) which act relative to the starting note. For example, the sequence for the scale type Major is 2 -> 2 -> 1 -> 2 -> 2 -> 2 -> 1. Thus, C Major specifies the starting note to be C, and applying the relative sequence of shifts yields: C +2-> D +2-> E +1-> F +2-> G +2-> A +2-> B +1-> C. The subset of notes specified by C Major is thus C, D, E, F, G, A, and B (a subset of seven notes). All scale types have a subset of seven notes except for Blues, which has six. In total we have 48 unique scales, i.e., 4 scale types and 12 possible starting notes. We treat Major and Minor as one type, as for a Major scale there is always a Minor that has exactly the same set of notes. In music theory, this is referred to as Relative Minor.
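To make the interval arithmetic concrete, here is a minimal sketch (our own illustration, not code from the paper) that enumerates the note subset of a scale from its start note and interval pattern. The Major pattern matches the sequence given above; the other three patterns are the standard music-theory values and are an assumption on our part.

```python
# Minimal sketch: enumerate the note subset of a scale (not from the paper).
TONES = ["C", "C#/Db", "D", "D#/Eb", "E", "F", "F#/Gb", "G", "G#/Ab", "A", "A#/Bb", "B"]

# Relative interval patterns; Major follows the sequence in the text,
# the rest are standard music-theory patterns (assumed here).
SCALE_PATTERNS = {
    "Major":          [2, 2, 1, 2, 2, 2, 1],
    "Harmonic Minor": [2, 1, 2, 2, 1, 3, 1],
    "Melodic Minor":  [2, 1, 2, 2, 2, 2, 1],
    "Blues":          [3, 2, 1, 1, 3, 2],
}

def scale_notes(start_note, scale_type):
    """Walk the interval pattern from the start note, modulo the 12-tone cycle."""
    idx = TONES.index(start_note)
    notes = [TONES[idx]]
    for step in SCALE_PATTERNS[scale_type][:-1]:  # the last step returns to the root
        idx = (idx + step) % 12
        notes.append(TONES[idx])
    return notes

print(scale_notes("C", "Major"))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
```

With 4 scale types and 12 possible start notes, this enumeration yields exactly the 48 unique scales mentioned above.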
Figure 1: Overview of our framework. Only skip connections for the current time step t are plotted.

Chord is a group of notes that sound good together. Similarly to scale, a chord has a start note and a type defining a set of intervals. There are mainly 6 types of triad chords: Major Chord, Minor Chord, Augmented Chord, Diminished Chord, Suspended 2nd Chord, and Suspended 4th Chord.

The Circle of Fifths is often used to produce a chord progression. It maps the 12 chord starting notes to a circle. When changing from one chord to another, moving to a nearby chord on the circle is often preferred, as this forms a strong chord progression that produces the sense of harmony."}, {"section_index": "2", "section_name": "HIERARCHICAL RECURRENT NETWORKS FOR POP MUSIC GENERATION", "section_text": "We follow the high level idea behind the Song from π to define our model. In particular, we generate music with a hierarchical Recurrent Neural Network where the layers and the structure of the hierarchy encode our prior knowledge about how pop music is composed. We first outline the model, and describe the details and justifications for our choices in the subsections that follow.

We condition our generation on the scale type, as this helps the model to pick up the regularities in pop songs. We encode melody with two random variables at each time step, representing which key is being played (the key layer) and the duration that the key will be pressed (the press layer). The melody is generated conditioned on the scale, which does not vary across the song, as is typically the case in pop music. We assume the drums and the chords are independent given the melody. Thus, conditioned on the melody, at each time step we generate the chord (the chord layer) as well as the drums (the drum layer). The output at all layers yields the final song. We refer the reader to Fig. 1 for an illustration of our hierarchical model."}, {"section_index": "3", "section_name": "4.1 THE ROLE OF SCALE", "section_text": "It is known from music theory that while in principle each song has 12 tones to choose from, most of the notes are in fact only using the six (for Blues) or seven (for other scales) tone subsets specified by the scale rule. We found that by conditioning the music generator on scale it captures these regularities more easily. However, we do not enforce the notes to be generated from the subset, and allow our model to generate notes outside the scale.
We confirm the above musical fact by analysing over 100 hours of pop song music from the midi man dataset. Since scale is defined relative to a starting note, we first try to factor out its influence and normalize all songs to have an identical start note. To identify the scale of a song, we compute the histogram over the 12 tones and match it with the 48 tone subsets of 4 scale types with 12 different start notes. We then normalize all songs to have start note C by applying a constant shift on all notes. This allows us to categorize any song into 4 scale types. Since this shift affects all notes at once, it does not affect how the song sounds (its harmony). Our analysis shows that for all notes in all Major scale songs, 94.66% are within the tone subset. For Harmonic Minor, Melodic Minor, and Blues, the percentage of notes that belong to the main tone set is 87.16%, 85.11%, and 90.93%, respectively. We refer the reader to Fig. 2, where the x-axis denotes the percentage of within-scale notes of a song, and the y-axis indicates how many songs in the dataset have that percentage. Note that the majority of the notes follow the scale rule. Furthermore, different scale types have different inlier distributions. We thus represent scale with a single random variable s ∈ {1, ..., 4} which is fixed for the whole song, and condition the model on it.

Figure 2: Distribution of within-scale note ratio for four scale types. x-axis: percentage of tones within the scale type's tone set, y-axis: percentage of songs of the scale type. (a)-(d) shows Major (Minor), Harmonic Minor, Melodic Minor, and Blues, respectively."}, {"section_index": "4", "section_name": "4.2 TWO-LAYER RNN FOR MELODY GENERATION", "section_text": "We represent the melody with two random variables per time step: which key is pressed, and the duration of the press. We use a RNN to generate the keys conditioned on the scale. Then, conditioned on the output of the key layer, a second RNN generates the duration of the press at each time step.

In particular, we model the key layer with a two-layer LSTM (Hochreiter & Schmidhuber (1997)) with a 512-dimensional hidden state, which outputs a note (key) at each time step. Note that we condition on scale s, thus we have a different set of weights for each scale. We only allow notes between C3 and C6, as notes outside this range are usually too low or too high to sound good. We remind the reader that given a scale, seven (or six for blues) out of the twelve notes (per octave) are statistically more plausible; however, we allow the model to choose from all 12. This results in a 37-dimensional output, as there are 36 possible notes corresponding to 3 octaves with 12 notes per octave, plus one silence state. We compute the probability of each key using the softmax:

P(y_t^key) ∝ exp(V_{y_t^key} · h_t),

where V_{y_t^key} is the row of the output embedding matrix V corresponding to key y_t^key, and h_t is the hidden state of the LSTM at time step t.

For readers with musical background, the Twelve-Tone Serialism technique (Schoenberg & Newlin (1951)) prevents emphasis of any one tone. However, our data analysis indicates that pop music is not influenced by it.
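The following minimal sketch (our own illustration; the names V, h_t and the random inputs are assumptions) shows the 37-way softmax head described above and how a key would be sampled from it.

```python
import numpy as np

# Minimal sketch of the 37-way key output: 36 notes (3 octaves x 12 tones)
# plus one silence state. `V` stands for the output embedding matrix and
# `h_t` for the LSTM hidden state; both are placeholders here.
rng = np.random.default_rng(0)
H = 512                       # hidden-state size of the key-layer LSTM
V = rng.normal(size=(37, H))  # one output-embedding row per key class

def key_distribution(h_t):
    """P(y^key) proportional to exp(V_{y^key} . h_t): a softmax over 37 classes."""
    logits = V @ h_t
    logits -= logits.max()    # numerical stability
    p = np.exp(logits)
    return p / p.sum()

h_t = rng.normal(size=H)
p = key_distribution(h_t)
y_key = rng.choice(37, p=p)   # sample the key played at time t
```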
As input to the LSTM we use a vector that concatenates multiple features: a one-hot encoding of the previously generated note y_{t-1}^key, Lookback features, and the melody profile. The Lookback features were proposed by Google Magenta (Waite et al.) to make it easier for the model to memorize recently produced notes and potentially repeat them. They include skip connections from two and one bar ago (a bar is 8 consecutively played notes), i.e., y_{t-16}^key and y_{t-8}^key. They also contain two additional features indicating whether the last generated key has been copied from one or two bars ago, i.e., 1(y_{t-1}^key = y_{t-1-8}^key) and 1(y_{t-1}^key = y_{t-1-16}^key), as well as a binary encoding of the current time t. This helps the model keep track of where in a 4-bar range it is, and thus produce music accordingly.

In addition, we introduce a new feature which we refer to as the melody profile. Intuitively, the profile represents the high-level music flow. To get the profile for each song, we compute the local note histogram at each time step with a width of two bars, and cluster all local histograms within the song into 10 clusters via k-means. We order the 10 clusters by mean note from low to high as clusters 1 to 10, and apply moving averages on the cluster id sequence to encourage local smoothness. This results in a 10-dimensional one-hot vector representation of the cluster id for each time step. This additional information allows the user to set the melody's ups and downs of the song.

The keys alone are not sufficient to describe how the melody is performed. Additionally, we also need to know the duration that each key needs to be pressed for. Towards this goal, conditioned on the melody, we generate the duration of each key with a two-layer LSTM with a 512-dimensional hidden state. We represent the duration of pressing as a forward counting sequence that is conditioned on the generated melody. The press layer outputs 1 when a new key is pressed, and sequentially outputs 2, 3, 4 and so on as the key is held on. When the current key is released, the press counter is reset to 1. Compared to the event on-off representation of Waite et al., our representation learns the melody flow and how to press separately. This is important, as Waite et al. has an extremely unbalanced output encoding of the melody key y_t^key."}, {"section_index": "5", "section_name": "4.3 CHORD AND DRUM RNN LAYERS", "section_text": "We studied all existing chords in our 100 hours of pop music. Although in principle a chord can be any arbitrary combination of multiple notes, we observed that in the actual music data 99.19% of the chords belong to one of 72 chord classes (6 types x 12 start notes). Fig. 3 shows the correlation between the melody's tone and the starting note of the chord playing at the same time. It can be seen that chord is strongly correlated with melody. These two findings inspire our design. We thus represent chord y_t^chd as a one-hot encoding with 72 classes, and predict it using a two-layer LSTM with a 512-dimensional hidden state, whose input is y_{t-4}^chd concatenated with y_{t-3:t}^key.

Figure 3: Co-occurrence of tones in melody (y-axis) and chord (x-axis). (a)-(d) shows Major (Minor), Harmonic Minor, Melodic Minor, and Blues, respectively.

We look at our music dataset and find all unique drum patterns with a duration of a half bar. We then compute the histogram of all the patterns. This forms a long tail distribution, where 94.60% comes from the top 100 common patterns. We generate drums conditioned on the key layer using a two-layer LSTM with a 512-dimensional hidden state. The drum y_t^drm is represented as a one-hot encoding of the 100 unique one-bar-long drum patterns. The input is y_{t-4}^drm concatenated with the notes from the previous three time steps, y_{t-3:t}^key.

We use cross-entropy as our loss function to train each layer. We follow the typical training strategy where we make predictions at each layer and time step but feed in ground-truth information to the next. This effectively decomposes training, and allows us to train all layers in parallel. We use the Adam optimizer, a learning rate of 2e-3 and a learning rate decay of 0.99 after each epoch, for 10 epochs.
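A minimal sketch of this decomposed, teacher-forced training is shown below (our own illustration: layer sizes, input wiring and sequence length are assumptions, not the paper's exact configuration). The key point is that each layer is trained with cross-entropy while receiving ground-truth inputs from the layer below, so the layers can be optimized independently.

```python
import torch
import torch.nn as nn

# Minimal sketch of per-layer cross-entropy training with teacher forcing.
T, H, N_KEY, N_CHD = 64, 512, 37, 72

key_rnn = nn.LSTM(input_size=N_KEY, hidden_size=H, num_layers=2)
key_head = nn.Linear(H, N_KEY)
chd_rnn = nn.LSTM(input_size=N_KEY + N_CHD, hidden_size=H, num_layers=2)
chd_head = nn.Linear(H, N_CHD)
loss_fn = nn.CrossEntropyLoss()

def step(gt_keys, gt_chords):
    """One training step; gt_* are ground-truth index sequences of length T."""
    key_in = nn.functional.one_hot(gt_keys, N_KEY).float().unsqueeze(1)
    key_out, _ = key_rnn(key_in)
    key_loss = loss_fn(key_head(key_out.squeeze(1))[:-1], gt_keys[1:])

    # The chord layer sees *ground-truth* keys, not the key layer's samples.
    chd_in = torch.cat([key_in.squeeze(1),
                        nn.functional.one_hot(gt_chords, N_CHD).float()], dim=1)
    chd_out, _ = chd_rnn(chd_in.unsqueeze(1))
    chd_loss = loss_fn(chd_head(chd_out.squeeze(1))[:-1], gt_chords[1:])
    return key_loss + chd_loss

params = (list(key_rnn.parameters()) + list(key_head.parameters())
          + list(chd_rnn.parameters()) + list(chd_head.parameters()))
opt = torch.optim.Adam(params, lr=2e-3)   # learning rate from the text
loss = step(torch.randint(N_KEY, (T,)), torch.randint(N_CHD, (T,)))
opt.zero_grad(); loss.backward(); opt.step()
```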
To synthesize music, we first randomly choose a scale and a profile x^prf. For generating x^prf, we randomly choose one cluster id with a random duration, and repeat until we get the desired total length of the music sequence. We then perform inference in our model conditioned on the chosen scale, and use x^prf as input to our key layer. At each time step, we sample a key according to P(y_t^key). We encode it as a one-hot vector and pass it to the press, chord and drum layers. We sample the press, chords and drums at each time step in a similar fashion.

Figure 4: Example of our music generation. From top to bottom: melody, chord and drum, respectively.

Before putting the outputs across layers together, we further adjust the generated sequences at the bar level. For melody, we first check at each bar if the first step is a continuation of a previous note or silence. If it is the latter, we find the first newly pressed note within the bar and move it to the beginning of the bar. We do similarly for the windows of the two half-bars as well as the four quarter-bars. This makes the melody more likely to be on the beat, and generally sounds better. We verify this in our experiments.

For chord, we generate one chord at each half bar, which is the majority of all single step chord generations. Furthermore, we incorporate the rule of chord progression in the Circle of Fifths as pairwise smoothness terms between chords, and compute the final chord sequence using dynamic programming. For drum, we generate one pattern at each half bar.

Our model generates with scale starting note C, and then applies a constant shift to generate music with other starting notes. Besides scale, which instrument to use is also customizable. However, we simply set all instruments as grand piano in all experiments, as the effect and musical meaning of different instrument combinations is beyond the scope of this paper."}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "To train our model, we took 100 hours of pop music from midi man, which consists of user-composed pop songs and video game music. In our generation, we always use 120 beats per minute with 4 time steps per beat. However, songs in the dataset can have arbitrary speed. To neutralize the effect of this, we detect the most frequent interval between two adjacent notes for each song, and iteratively divide or multiply this interval by 2 until it falls in the range between 0.25s and 0.5s. We use this as a measure of the song's beat duration. We then adjust the song's temporal axis so that all songs have the same beat duration of 0.5s.
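A minimal sketch of this tempo normalization is given below (our own illustration; the helper name and the toy onset times are assumptions).

```python
# Minimal sketch of the beat-duration normalization described above: take the
# most frequent inter-onset interval of a song and halve or double it until
# it falls in [0.25s, 0.5s].
from collections import Counter

def beat_duration(onset_times):
    """onset_times: sorted note-onset times in seconds for one song."""
    intervals = [round(b - a, 3)
                 for a, b in zip(onset_times, onset_times[1:]) if b > a]
    interval, _ = Counter(intervals).most_common(1)[0]
    while interval < 0.25:
        interval *= 2
    while interval > 0.5:
        interval /= 2
    return interval  # the song is then time-stretched so this becomes 0.5s

print(beat_duration([0.0, 0.125, 0.25, 0.375, 0.5]))  # 0.125 -> 0.25
```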
A MIDI file can be separated into different channels/tracks, where the 9th channel is specifically reserved for drums. We categorize the rest of the non-drum tracks into melody, chord, and else, by simply setting thresholds on the average number of unique notes within a bar and the average number of note changes within a bar, as chords are by definition repetitive. Fig. 4 shows an example of our music generation.

To evaluate the quality of our music generation, we conduct a human survey with 27 participants. All subjects are university students who did not have any prior knowledge about the content of our project. In the survey, participants are presented with several pairs of 30-second music clips, and are asked to vote which clip in the pair sounds better. We gave no other information about what they are listening to. They are also allowed to submit a neutral vote in case they cannot decide between the two choices. In our study, we consider three cases: our full method versus Magenta (Waite et al.), our method with melody only versus Google Magenta (Waite et al.), and our method versus our method without the temporal alignment described in Sec. 4.5. We randomly generated 10 songs per method and randomly shuffled each pair. For the Magenta baseline we used its Lookback version, which was the latest version at the time of our submission.

As shown in Table 1, most participants prefer songs produced by our method compared to Magenta. Participants also made comments such as "music sounds better with percussion than piano alone" and "multiple instruments with continuous play is much better". This confirms that our multi-layer generation improves music quality. A few participants also pointed out that the drums sound too different and do not fit the melody perfectly, which indicates that further improvements can still be made. In the second comparison, we study if the quality improvement of our method is only caused

Table 1: Human evaluation of music generated by different methods: ours and Waite et al.'s Magenta. Ours-MO and Ours-NA are short for Ours Melody Only and Ours No Alignment. We allowed neutral votes, thus the sum for each pair is less than 100%.

Table 2: Evaluations of the longest matching sub-sequence with the training data, and self-repeating times.

by adding chords and drums, or is also related to our two-layer melody generation with alignment. It can be seen that without chords and drums, the score drops as expected, but is still much higher than the Magenta baseline. This is because our method produces less recursion and silence, and faster and more accurate tempo, as mentioned by the participants. In the last comparison, most participants prefer our full method over the no-alignment version, since beats are more subtle and better timed. This demonstrates the usefulness of temporal alignment. We performed significance tests on the evaluation results in Table 1. All comparisons passed the significance test with significance level 5%. The lowest alpha values to reject the null hypothesis are 1e-19, 1e-14, and 1e-19, respectively. Further experimental results of removing the music scale in our method and adding temporal alignment to the baseline can be found on our project page.

To examine the suitability of the four scale types we use, we collected the list of all existing musical scales from Wikipedia and measured the scale distribution of the dataset.
37.8% of the data belongs to our four scales, 47.7% belongs to Acoustic, Algerian, Lydian, Adonai Malakh, and Ukrainian, while 14.5% belongs to the remaining 31 uncommonly seen scales such as Insen, Iwato, Yo, and Enigmatic. We also found that the five scales that account for 47.7% are either one or two degrees away from one of our used scales (all notes are the same except one being one or two steps away). This experiment shows that even in the most rigorous musical setting, at least 85.5% of online songs are very close to the four scales that we use.

Finally, we study our model's capability to generate new music. Towards this goal, we generated 100 sequences of 50 seconds of length using different random initializations. We perform two evaluations. First, for each sequence, we search for the longest sub-sequence of keys that matches part of the training data, and record its length. This evaluates how much the model copies the training data. Secondly, we break each generated melody into segments of 2 bars in length (inspired by the common definition of music plagiarism). We then compare each segment to all segments in the rest of the 100 generated songs, and record the repeat count. This evaluates how much the model repeats itself. For comparison, we repeat the same evaluation for the Magenta baseline, and for human-composed music. Table 2 reports the results. It can be seen that our method performs similarly to Magenta in terms of copying (sub-seq). It is somewhat surprising that human composers in fact tend to copy more from other songs, which indicates that both generation approaches can be further relaxed in terms of copying. Our method is less likely to generate recurring melodies (repeat) compared to Magenta, and is closer to the statistics of human-produced songs."}, {"section_index": "7", "section_name": "6.1 NEURAL DANCING AND KARAOKE", "section_text": "In this section we demonstrate two novel applications of our pop music generation framework. We refer the reader to http://www.cs.toronto.edu/songfrompi/ for the music videos.

In our first application, we attempt to generate both music and a stickman dancing to it, as well as a sequence of karaoke-like text that people can sing along with. To learn the relationship between music and dance, we download 1 hour of video from the game Just Dance, as well as the MIDI files for songs included in the video from different sources. We use the method in Newell et al. (2016) to track single-frame 2D human pose in the videos. We process the single-frame tracking result to ensure left-right body consistency through time, and then use the method of Zhou et al. (2016) to convert the 2D pose sequence into 3D. Example results are shown in Fig. 5. We observe that our pose processing pipeline is able to extract reasonable human poses most of the time. However, the quality is not perfect due to tracking failure or video effects. We define pose similarity as the average Euclidean distance over all joints, and cluster poses into 456 clusters. We used the clustering method of Frey & Dueck (2007), as the number of clusters is large.

Figure 5: Examples from Just Dance and the 3D human pose tracking result. (a) and (b) are success cases, pose tracking fails in (c), and (d) shows the defect in the video which makes tracking difficult.

We learn to generate a stickman dancing by adding another dancing layer on top of the key layer, just like for drum and chord. We generate one pose at each beat, which is equivalent to 4 time steps or 0.5 seconds in 120 beat-per-minute music. In particular, we predict one of the 456 pose clusters using a linear projection layer followed by a softmax. We use cross-entropy at each time step as our loss function. At inference time, we further apply a moving average to temporally smooth the generated 3D pose sequence.
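A minimal sketch of this inference-time smoothing is given below (our own illustration; the pose tensor layout, joint count and window size are assumptions).

```python
import numpy as np

# Minimal sketch of moving-average smoothing over a generated 3D pose
# sequence. `poses` has shape (T, J, 3) for T beats and J joints.
def smooth_poses(poses, window=3):
    padded = np.pad(poses, ((window // 2, window // 2), (0, 0), (0, 0)),
                    mode="edge")
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="valid"), 0, padded)

poses = np.random.rand(32, 14, 3)  # 32 beats, 14 joints
print(smooth_poses(poses).shape)   # (32, 14, 3)
```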
To learn the relationship between music and lyrics, we collect 51 hours of lyrics data from the internet. This data contains 50 hours of text without music, and the remaining 1 hour are songs we collected from Just Dance. For the music part, we temporally align each sentence in the lyrics with the MIDI music by using the widely used lrc format, which records the time tag at the beginning of every sentence. We select words that appear at least 4 times, which yields a vocabulary size of 3390, including unknown and end-of-sentence tokens. Just as for dance, we generate one word per beat using another lyrics layer on top of the key layer."}, {"section_index": "8", "section_name": "6.2 NEURAL STORY SINGING", "section_text": "In this application our aim is to sing a song about a photo. We first generate a story about the photo with the neural storyteller (Kiros et al. (2015)) and try to accompany the generated text with music. We utilize the same 1-hour dataset of temporally aligned lyrics and music. We further include the phoneme list of our 3390-word vocabulary, as we also want to sing the story. Starting from the text produced by the neural storyteller, we arrange it into a temporal sequence with 1 beat per word and a short pause for end-of-sentence, where the pause length is decided such that the next sentence starts from a new bar. As our dataset is relatively small, we generate the profile conditioned on the text, which has fewer dimensions compared to the key. This is done by a 2-layer LSTM that takes as input the generated profile at the last time step concatenated with a one-hot vector of the current word, and outputs the current profile. We then generate the song with our model given the generated profile. The generated melody key is then used to decide on the pitch frequency of a virtual singer, assuming the key-to-pitch correspondence of a grand piano. We further constrain that the singer's final pitch is always in the range of E3 to G4, which we empirically found to be the natural pitch range. We then replace all words outside the vocabulary with the sound Ooh, and play the rendered singing with the generated music."}, {"section_index": "9", "section_name": "CONCLUSION AND FUTURE WORK", "section_text": "We have presented a hierarchical approach to pop song generation which exploits music theory in the model design. In contrast to past work, our approach is able to generate multi-track music. Our human studies show the strength of our framework compared to an existing strong baseline. We additionally proposed two new applications: neural dancing & karaoke, and neural story singing.

In this paper, we showed that incorporating knowledge from music theory into the model, as well as capturing multiple aspects of music, results in better sounding songs. However, generating appealing and interesting music that captures structure, rhythm, and mood is challenging, and there is an exciting road ahead to improve on these aspects in the future."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Jamshed J. Bharucha and Peter M. Todd. Modeling the perception of tonal structure with neural nets. Computer Music Journal, 13(4):44-53, 1989.

Nicolas Boulanger-lewandowski, Yoshua Bengio, and Pascal Vincent. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. In ICML, 2012.

Michael Chan, John Potter, and Emery Schubert. Improving algorithmic music composition with machine learning. 2006.
Douglas Eck and Juergen Schmidhuber. A first look at music composition using lstm recurrent neural networks. 2002.

Steve Engels, Fabian Chan, and Tiffany Tong. Automatic real-time music generation for games. In AIIDE Workshop, 2015.

Brendan J Frey and Delbert Dueck. Clustering by passing messages between data points. Science, 315:972-976, 2007.

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv:1508.06576, 2015.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Allen Huang and Raymond Wu. Deep learning for music. arXiv preprint arXiv:1606.04930, 2016.

Daniel Johnson. Composing music with recurrent neural networks. https://goo.gl/YP9QyR.

Semin Kang, Soo-Yol Ok, and Young-Min Kang. Automatic music generation and machine learning based evaluation. pp. 436-443. Springer Berlin Heidelberg, 2012.

Andrej Karpathy, Justin Johnson, and Li Fei-Fei. Visualizing and understanding recurrent networks. In ICLR 2016 Workshop, 2016.

Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In NIPS, 2015.

Reddit midi man. Midi collection. https://goo.gl/4moEZ3J.

Michael C. Mozer. Neural network music composition by prediction: Exploring the benefits of psychoacoustic constraints and multi-scale processing. Connection Science, 6(2-3), 1996.

Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In ECCV, 2016.

Arnold Schoenberg and Dika Newlin. Style and idea. Technical report, Williams and Norgate, London, 1951.

Edgar Simo-Serra, Sanja Fidler, Francesc Moreno-Noguer, and Raquel Urtasun. Neuroaesthetics in fashion: Modeling the perception of beauty. In CVPR, 2015.

Wikipedia. List of musical scales and modes. https://goo.gl/5kvXLP.

Xiaowei Zhou, Menglong Zhu, Spyridon Leonardos, Kosta Derpanis, and Kostas Daniilidis. Sparseness meets deepness: 3d human pose estimation from monocular video. In CVPR, 2016."}]
Hk3mPK5gg | [{"section_index": "0", "section_name": "TRAINING AGENT FOR FIRST-PERSON SHOOTER GAME WITH ACTOR-CRITIC CURRICULUM LEARNING", "section_text": "Yuxin Wu
Carnegie Mellon University

Yuandong Tian
Facebook AI Research"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep Reinforcement Learning has achieved super-human performance in fully observable environments, e.g., in Atari Games [Mnih et al. (2015)] and Computer Go [Silver et al. (2016)]. Recently, the Asynchronous Advantage Actor-Critic (A3C) model [Mnih et al. (2016)] shows good performance for 3D environment exploration, e.g. labyrinth exploration. However, in general, to train an agent in a partially observable 3D environment from raw frames remains an open challenge. Direct application of A3C to competitive 3D scenarios, e.g. 3D games, is nontrivial, partly due to sparse and long-term rewards in such scenarios.

Doom is a 1993 First-Person Shooter (FPS) game in which a player fights against other computer-controlled agents or human players in an adversarial 3D environment. Previous works on FPS AI [van Waveren (2001)] focused on using hand-tuned state machines and privileged information, e.g., the geometry of the map and the precise location of all players, to design playable agents. Although a state machine is conceptually simple and computationally efficient, it does not operate like human players, who only rely on visual (and possibly audio) inputs. Also, many complicated situations require manually designed rules which could be time-consuming to tune.

In this paper, we train an AI agent in Doom with a framework based on A3C with convolutional neural networks (CNN). This model uses only the recent 4 frames and game variables from the AI side to predict the next action of the agent and the value of the current situation. We follow the curriculum learning paradigm [Bengio et al. (2009); Jiang et al. (2015)]: start from simple tasks and then gradually try harder ones. The difficulty of the task is controlled by a variety of parameters in the Doom environment, including different types of maps, the strength of the opponents and the design of the reward function. We also develop adaptive curriculum training that samples from a varying distribution of tasks to train the model, which is more stable and achieves higher scores than A3C with the same number of epochs. As a result, our trained agent, named F1, won the champion in Track 1 of the ViZDoom AI Competition (http://vizdoom.cs.put.edu.pl/competition-cig-2016/results) by a large margin.

There are many contemporary efforts on training a Doom AI based on the VizDoom platform [Kempka et al. (2016)] since its release. Arnold [Lample & Chaplot (2016)] also uses game frames and trains an action network using Deep Recurrent Q-learning [Hausknecht & Stone (2015)], and a navigation network with DQN [Mnih et al. (2015)]. However, there are several important differences. To predict the next action, they use a hybrid architecture (CNN+LSTM) that involves a more complicated training procedure. Second, in addition to game frames, they require internal game status about the opponents as extra supervision during training, e.g., whether an enemy is present in the current frame. IntelAct [Dosovitskiy & Koltun (2017)] models the Doom AI bot training in a supervised manner by predicting the future values of game variables (e.g., health, amount of ammo, etc.) and acting accordingly. In comparison, we use curriculum learning with asynchronous actor-critic models and use stacked frames (4 most recent frames) and resized frames to mimic short-term memory and attention. Our approach requires no opponent's information, and is thus suitable as a general framework to train agents for closed-source games.

In the VizDoom AI Competition 2016 at the IEEE Computational Intelligence And Games (CIG) Conference, our AI won the champion of Track1 (limited deathmatch with known map), and IntelAct won the champion of Track2 (full deathmatch with unknown maps). Neither of the two teams attended the other track. Arnold won the second places of both tracks and CLYDE [Ratcliffe et al. (2017)] won the third place of Track1."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "In this paper, we propose a new framework for training a vision-based agent for First-Person Shooter (FPS) games, in particular Doom. Our framework combines the state-of-the-art reinforcement learning approach (Asynchronous Advantage Actor-Critic (A3C) model [Mnih et al. (2016)]) with curriculum learning.
Our model is simple in design and only uses game states from the AI side, rather than using opponents' information [Lample & Chaplot (2016)]. On a known map, our agent won 10 out of the 11 attended games and the champion of Track1 in the ViZDoom AI Competition 2016 by a large margin, with a 35% higher score than the second place.

Figure 1: The basic framework of the actor-critic model.

The goal of Reinforcement Learning (RL) is to train an agent so that its behavior maximizes/minimizes the expected future rewards/penalties it receives from a given environment [Sutton & Barto (1998)]. Two functions play important roles: a value function V(s) that gives the expected reward of the current state s, and a policy function π(a|s) that gives a probability distribution over the candidate actions a for the current state s. Getting the ground-truth value of either function would largely solve RL: the agent just follows π(a|s) to act, or jumps to the best state provided by V(s) when the number of candidate next states is finite and practically enumerable. However, neither is trivial.

Actor-critic models [Barto et al. (1983); Sutton (1984); Konda & Tsitsiklis (1999); Grondman et al. (2012)] aim to jointly estimate V(s) and π(a|s): from the current state s_t, the agent explores the environment by iteratively sampling the policy function π(a_t|s_t; w) and receives positive/negative reward, until the terminal state or a maximum number of iterations is reached. The exploration gives a trajectory {(s_t, a_t, r_t), (s_{t+1}, a_{t+1}, r_{t+1}), ...}, from which the policy function and value function are updated. Specifically, to update the value function, we use the expected reward R_t along the trajectory as the ground truth; to update the policy function, we encourage actions that lead to high rewards, and penalize actions that lead to low rewards. To determine whether an action leads to a high- or low-rewarding state, a reference point, called a baseline [Williams (1992)], is usually needed. Using a zero baseline might increase the estimation variance. [Peters & Schaal (2008)] give a way to estimate the best baseline (a weighted sum of cumulative rewards) that minimizes the variance of the gradient estimation, in the scenario of episodic REINFORCE [Williams (1992)].

In actor-critic frameworks, we pick the baseline as the expected cumulative reward V(s) of the current state, which couples the two functions V(s) and π(a|s) together in the training, as shown in Fig. 1. Here the two functions reinforce each other: a correct π(a|s) gives high-rewarding trajectories which update V(s) towards the right direction; a correct V(s) picks out the correct actions for π(a|s) to reinforce. This mutual reinforcement behavior makes actor-critic models converge faster, but also prone to converge to bad local minima, in particular for on-policy models that follow the very recent policy to sample trajectories during training. If the experience received by the agent in consecutive batches is highly correlated and biased towards a particular subset of the environment, then both π(a|s) and V(s) will be updated towards a biased direction and the agent may never see the whole picture. To reduce the correlation of game experience, the Asynchronous Advantage Actor-Critic model [Mnih et al. (2016)] runs multiple independent threads of the game environment in parallel. These game instances are likely uncorrelated, therefore their experience in combination would be less biased.

Figure 2: Two maps we used in the paper. FlatMap is a simple square containing four pillars; CIGTrack1 is the map used in Track1 of the ViZDoom AI Competition (we did not attend Track2). Black dots are items (weapons, ammo, medkits, armors, etc).

For on-policy models, the same mutual reinforcement behavior will also lead to a highly peaked π(a|s) towards a few actions (or a few fixed action sequences), since it is always easy for both actor and critic to over-optimize on a small portion of the environment, and end up "living in their own realities". To reduce the problem, [Mnih et al. (2016)] added an entropy term to the loss to encourage diversity, which we find to be critical. The final gradient update rules are as follows:

w ← w + α(R_t − V(s_t))∇_w log π(a_t|s_t) + ∇_w H(π(·|s_t))    (1)
w_V ← w_V − α_V ∇_{w_V}(R_t − V(s_t))²    (2)

where R_t = Σ_{t'≥t} γ^{t'−t} r_{t'} is the expected discounted reward at time t, and α, α_V are the learning rates.
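A minimal sketch of the updates in Eqn. 1 and 2 is shown below (our own illustration: network sizes, the entropy weight `beta`, and the toy inputs are assumptions, not the paper's configuration).

```python
import torch
import torch.nn as nn

# Minimal sketch of actor-critic updates with a value baseline and an
# entropy bonus (Eqn. 1 and 2). Placeholder 8-dim states, 6 actions.
policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 6))
value = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(policy.parameters()) + list(value.parameters()),
                       lr=1e-4)

def update(states, actions, returns, beta=0.01):
    """states: (T, 8); actions: (T,); returns R_t: (T,)."""
    logp = torch.log_softmax(policy(states), dim=1)
    entropy = -(logp.exp() * logp).sum(dim=1).mean()
    v = value(states).squeeze(1)
    advantage = (returns - v).detach()   # R_t - V(s_t); no gradient into V here
    policy_loss = -(advantage * logp[torch.arange(len(actions)), actions]).mean()
    value_loss = 0.5 * (returns - v).pow(2).mean()
    loss = policy_loss - beta * entropy + value_loss
    opt.zero_grad(); loss.backward(); opt.step()

update(torch.randn(16, 8), torch.randint(6, (16,)), torch.randn(16))
```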
Architecture. While [Mnih et al. (2016)] keeps a separate model for each asynchronous agent and performs model synchronization once in a while, we use an alternative approach called Batch-A3C, in which all agents act on the same model and send batches to the main process for gradient descent optimization. The agents' models are updated after each gradient update. Note that the contemporary work GA3C [Babaeizadeh et al. (2017)] also proposes a similar architecture. In their architecture, there is a prediction queue that collects agents' experience and sends it to multiple predictors, and a training queue that collects experience to feed the optimization."}, {"section_index": "3", "section_name": "DOOM AS A REINFORCEMENT LEARNING PLATFORM", "section_text": "In Doom, the player controls the agent to fight against enemies in a 3D environment (e.g., in a maze). The agent can only see the environment from his viewpoint and thus receives partial information upon which it makes decisions. On modern computers, the original Doom runs in thousands of frames per second, making it suitable as a platform for training AI agents. ViZDoom [Kempka et al. (2016)] is an open-source platform that offers a programming interface to communicate with the Doom engine, ZDoom. From the interface, users can obtain current frames of the game, and control the agent's action. ViZDoom offers much flexibility, including:

Rich Scenarios. Many customized scenarios are made due to the popularity of the game, offering a variety of environments to train from. A scenario consists of many components, including 2D maps for the environment and scripts to control characters and events. Open-source tools, such as SLADE, are also widely available to build new scenarios. We built our customized map (Fig. 2(b)) for training.
Game variables. In addition to image frames, the ViZDoom environment also offers many game variables revealing the internal state of the game. This includes HEALTH, AMMO_? (agent's health and ammunition), FRAG_COUNT (current score) and so on. ViZDoom also offers USER? variables that are computed on the fly via scenario scripts. These USER? variables can provide more information about the agent, e.g., its spatial location. Enemy information could also be obtained by modifying ViZDoom [Lample & Chaplot (2016)]. Such information is used to construct a reward function, or as direct supervision to accelerate training [Lample & Chaplot (2016)].

Built-in bots. Built-in bots can be inserted into the battle. They are state machines with privileged information about the map and the player, which results in apparently decent intelligence with minimal computational cost. By competing against built-in bots, the agent learns to improve.

Evaluation Criterion. In FPS games, to evaluate their strength, multiple AIs are placed in a scenario for a deathmatch, in which every AI plays for itself against the remaining AIs. Frags per episode, i.e., the number of kills minus the number of suicides for the agent in one round of the game, is often used as a metric. An AI is stronger if its frags rank higher against others. In this work, we use an episode of 2-minute game time (4200 frames in total) for all our evaluations unless noted otherwise.

Figure 3: The network structure of the proposed model. It takes 4 recent game frames plus 4 recent attention frames as the input state s, and outputs a probability distribution π(a|s) over the 6 discrete actions. The policy and value networks share parameters."}, {"section_index": "5", "section_name": "4.1 NETWORK ARCHITECTURE", "section_text": "We use convolutional neural networks to extract features from the game frames and then combine their output representation with game variables. Fig. 3 shows the network architecture and Tbl. 1 gives the parameters. The network takes the frames as the input (i.e., the state s) and outputs two branches, one that outputs the value function V(s) by regression, while the other outputs the policy function π(a|s) by a regular softmax. The parameters of the two functions are shared before the branch.

For input, we use the most recent 4 frames plus the center part of them, scaled to the same size (120 x 120). Therefore, these centered "attention frames" have higher resolution than regular game frames, and greatly increase the aiming accuracy. The policy network gives 6 actions, namely MOVE_FORWARD, MOVE_LEFT, MOVE_RIGHT, TURN_LEFT, TURN_RIGHT, and ATTACK. We found other on-off actions (e.g., MOVE_BACKWARD) offered by ViZDoom less important. After feature extraction by the convolutional network, game variables are incorporated. This includes the agent's Health (0-100) and Ammo (how many bullets are left). They are related to the AI itself and thus legal in the game environment for training, testing and the ViZDoom AI competition.
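A minimal sketch of this architecture is given below (our own illustration following Fig. 3 and Tbl. 1; the exact point at which the game variables are concatenated is an assumption).

```python
import torch
import torch.nn as nn

# Minimal sketch of the shared-trunk policy/value network: a CNN over the
# stacked frames, game variables appended after the convolutional features,
# then separate policy and value heads.
class DoomNet(nn.Module):
    def __init__(self, n_actions=6):
        super().__init__()
        self.trunk = nn.Sequential(                      # layers from Tbl. 1
            nn.Conv2d(24, 32, 7, stride=2), nn.ReLU(),   # 8 RGB frames = 24 ch
            nn.Conv2d(32, 64, 7, stride=2), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(64, 128, 3), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(128, 192, 3), nn.ReLU(),
            nn.Flatten(),
        )
        feat = self.trunk(torch.zeros(1, 24, 120, 120)).shape[1]
        self.fc = nn.Sequential(nn.Linear(feat + 2, 1024), nn.ReLU())  # +health, ammo
        self.policy = nn.Linear(1024, n_actions)
        self.value = nn.Linear(1024, 1)

    def forward(self, frames, variables):
        x = torch.cat([self.trunk(frames), variables], dim=1)
        x = self.fc(x)
        return torch.softmax(self.policy(x), dim=1), self.value(x)

pi, v = DoomNet()(torch.zeros(2, 24, 120, 120), torch.zeros(2, 2))
```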
"}, {"section_index": "6", "section_name": "4.2 TRAINING PIPELINE", "section_text": "Our training procedure is implemented with TensorFlow [Abadi et al. (2016)] and tensorpack. We open 255 processes, each running one Doom instance and sending experience (s_t, a_t, r_t) to the main process, which runs the training procedure. The main process collects frames from different game instances to create batches, and optimizes on these batches asynchronously on one or more GPUs using Eqn. 1 and Eqn. 2. The frames from different processes, which run independent game instances, are likely to be uncorrelated, which stabilizes the training. This procedure is slightly different from the original A3C, where each game instance collects its own experience and updates the parameters asynchronously.

Table 2: Parameters for different maps.

Parameter          | Description                      | FlatMap              | CIGTrack1
living             | Penalize agent who just lives    | -0.008 / action      | -0.008 / action
health_loss        | Penalize health decrement        | -0.05 / unit         | -0.05 / unit
ammo_loss          | Penalize ammunition decrement    | -0.04 / unit         | -0.04 / unit
health_pickup      | Reward for medkit pickup         | 0.04 / unit          | 0.04 / unit
ammo_pickup        | Reward for ammunition pickup     | 0.15 / unit          | 0.15 / unit
dist_penalty       | Penalize the agent when it stays | -0.03 / action       | -0.03 / action
dist_reward        | Reward the agent when it moves   | 9e-5 / unit distance | 9e-5 / unit distance
dist_penalty_thres | Threshold of displacement        | 8                    | 15
num_bots           | Number of built-in bots          | 8                    | 16

Despite the use of the entropy term, we still find that π(·|s) is highly peaked. Therefore, during trajectory exploration, we encourage exploration by the following changes: a) multiply the policy output of the network by an exploration factor (0.2) before the softmax; b) uniformly randomize the action for 10% of the frames.
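A minimal sketch of these two exploration tweaks follows (our own illustration; the function name and toy logits are assumptions).

```python
import numpy as np

# Minimal sketch of the exploration tweaks: scale the policy logits by 0.2
# before the softmax, and act uniformly at random on 10% of the frames.
rng = np.random.default_rng(0)

def explore_action(logits, factor=0.2, eps=0.1, n_actions=6):
    if rng.random() < eps:
        return rng.integers(n_actions)       # uniform random 10% of the time
    z = factor * logits                      # flatten the peaked policy
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(n_actions, p=p)

print(explore_action(np.array([4.0, 1.0, 0.0, 0.0, 0.0, 0.0])))
```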
We use Adam [Kingma & Ba (2014)] with ε = 10⁻³ for training. Batch size is 128, discount factor γ = 0.99, learning rate α = 10⁻⁴ and the policy learning rate 0.08α. The model is trained from scratch. The training procedure runs on an Intel Xeon CPU E5-2680v2 at 2.80GHz and 2 TitanX GPUs. It takes several days to obtain a decent result. Our final model, namely the F1 bot, is trained for around 3 million mini-batches on multiple different scenarios."}, {"section_index": "7", "section_name": "4.3 CURRICULUM LEARNING", "section_text": "When the environment only gives very sparse rewards, or is adversarial, A3C takes a long time to converge to a satisfying solution. A direct training with A3C on the map CIGTrack1 with 8 built-in bots does not yield sensible performance. To address this, we use curriculum learning [Bengio et al. (2009)], which trains an agent with a sequence of progressively more difficult environments. By varying parameters in Doom (Sec. 3), we can control its difficulty level.

Table 3: Curriculum design for FlatMap. Note that the enemy uses RocketLauncher except for Class 0 (Pistol).

       | Class 0 | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7
Speed  | 0.2     | 0.2     | 0.4     | 0.4     | 0.6     | 0.8     | 0.8     | 1.0
Health | 40      | 40      | 40      | 60      | 60      | 60      | 80      | 100

Table 1: Network parameters. C7x7x32s2 = convolutional layer with 7x7 kernel, stride 2 and number of output planes 32. MP = MaxPooling. Each convolutional and fully connected layer is followed by a ReLU, except for the last output layer.

Layer # | 1         | 2         | 3       | 4        | 5       | 6        | 7
        | C7x7x32s2 | C7x7x64s2 | MP3x3s2 | C3x3x128 | MP3x3s2 | C3x3x192 | FC1024

As mentioned in [Kempka et al. (2016)], care should be taken with frame skip. A small frame skip introduces strong correlation in the training set, while a big frame skip reduces the effective number of training samples. We set the frame skip to 3. We choose 640x480 as the input frame resolution, and do not use a high-aspect-ratio resolution [Lample & Chaplot (2016)] to increase the field of view.

Reward Shaping. Reward shaping has been shown to be an effective technique for applying reinforcement learning in a complicated environment with delayed rewards [Ng et al. (1999); Devlin et al. (2011)]. In our case, besides the basic reward for kills (+1) and death (-1), intermediate rewards are used as shown in Tbl. 2. We penalize the agent for simply living, encouraging it to explore and encounter more enemies. health_loss and ammo_loss place a linear penalty on decrements of health and ammunition. ammo_pickup and health_pickup place rewards for picking up these two items. In addition, there is extra reward for picking up ammunition when in need (e.g., almost out of ammo). dist_penalty and dist_reward push the agent away from its previous locations, encouraging it to explore. The penalty is applied at every action when the displacement of the bot relative to the last state is less than the threshold dist_penalty_thres, and dist_reward is applied for every unit of displacement the agent makes. Similar to [Lample & Chaplot (2016)], the displacement information is computed from the ground-truth location variables provided by the Doom engine, and will not be used in the competition. However, unlike [Lample & Chaplot (2016)], which uses an enemy-in-sight signal for training, locations can be extracted directly from USER? variables, or can easily be computed roughly from the action history.

Curriculum Design. We train the bot on FlatMap, which contains a simple square with a few pillars (Fig. 2(a)), with several curricula (Tbl. 3), and then proceed to CIGTrack1. For each map, we design curricula by varying the strength of the built-in bots, i.e., their moving speed, initial health and initial weapon. Our agent always uses RocketLauncher as its only weapon. Training on FlatMap leads to a capable initial model which is quickly adapted to more complicated maps. As shown in Tbl. 2, for CIGTrack1 we increase dist_penalty_thres to keep the agent moving, and increase num_bots so that the agent encounters more enemies per episode.

Adaptive Curriculum. In addition to staged curriculum learning, we also design adaptive curriculum learning by assigning a probability distribution over the different levels for each thread that runs a Doom instance. The probability distribution shifts towards more difficult curricula when the agent performs well on the current distribution, and shifts towards easier levels otherwise. We consider the agent to perform well if its frag count is greater than 10 points.
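A minimal sketch of such adaptive level sampling is given below (our own illustration; the paper does not specify the shift rule, so the weight update here is an assumption).

```python
import numpy as np

# Minimal sketch of adaptive curriculum sampling: each game thread keeps a
# distribution over difficulty levels that shifts toward harder levels when
# recent frag counts exceed 10, and toward easier ones otherwise.
rng = np.random.default_rng(0)

class AdaptiveCurriculum:
    def __init__(self, n_levels=8):
        self.w = np.ones(n_levels)  # unnormalized level weights
        self.level = 0

    def sample_level(self):
        p = self.w / self.w.sum()
        self.level = rng.choice(len(self.w), p=p)
        return self.level

    def report(self, frags):
        # shift mass one level up (down) when performance is good (bad);
        # the 0.1 step size is an assumed hyperparameter
        direction = 1 if frags > 10 else -1
        nxt = int(np.clip(self.level + direction, 0, len(self.w) - 1))
        self.w[nxt] += 0.1 * self.w.sum()

cur = AdaptiveCurriculum()
for episode in range(5):
    lvl = cur.sample_level()
    cur.report(frags=rng.integers(0, 20))
```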
Besides, the trained model might get stuck in rare situations, e.g., it may keep moving forward while blocked by an explosive bucket. We also designed rules to detect and fix such cases."}, {"section_index": "9", "section_name": "5 EXPERIMENT", "section_text": "In this section, we describe the training procedure (Sec. 5.1) and evaluate our AIs with an ablation analysis (Sec. 5.2) and the ViZDoom AI Competition (Sec. 5.3). We mainly compare three AIs: (1) F1Pre, the bot trained on FlatMap only; (2) F1Plain, the bot trained on both FlatMap and CIGTrack1 but without post-training rules; and (3) the final F1 bot that attended the competition."}, {"section_index": "10", "section_name": "5.1 CURRICULUM LEARNING ON FLATMAP", "section_text": "Fig. 4 shows that curriculum learning increases the performance of the agents over all levels. When an agent becomes stronger at a higher class level, it is also stronger at the lower levels, without overfitting. Fig. 5 shows a comparison between adaptive curriculum learning and pure A3C. We can see that pure A3C can learn on FlatMap but is slower. Moreover, on CIGTrack1, a direct application of A3C does not yield sensible performance.

[Figure 4 plot: average frags vs. enemy class (0-7) for Models 0-6, on FlatMap and CIGTrack1.]
[Figure 5 plot: average and max frags vs. training epoch, for A3C and adaptive curriculum learning.]

Figure 5: Performance comparison on Class 7 (hardest) of FlatMap between A3C [Mnih et al. (2016)] and adaptive curriculum learning, at different stages of training. Average frags and max frags are computed from 100 episodes. Adaptive curriculum shows higher performance and is relatively more stable."}, {"section_index": "11", "section_name": "5.2 ABLATION ANALYSIS", "section_text": "Visualization. Fig. 6 shows a visualization of the first convolutional layer of the trained AI agent. We can see that the convolutional kernels for the current frame are less noisy than the kernels for previous frames. This means that the agent makes the most use of the current frame.

Effect of History Frames. Interestingly, while the agent focuses on the current frame, it also uses motion information. To test this, we use (1) 4 duplicated current frames, and (2) the 4 recent frames in reverse order, as the input (as sketched below). This gives 8.50 and 2.39 mean frags, respectively, compared to 10.34 in the normal case, showing that the agent heavily uses motion information for better decisions. In particular, the bot is totally confused by the reversed motion features. Detailed results are shown in Tbl. 5.

Figure 6: Visualization of the convolutional filters in the first layer of our network. The filters are grouped by the frame index (t-3 to t) they correspond to. Each group consists of two rows of 32 RGB filters for the regular and attention frames, respectively. The filters corresponding to the current frame (last row) are less noisy than the others, showing that the bot is more focused on the current frame.

Figure 4: Average frags over a 300-episode evaluation on FlatMap (left) and CIGTrack1 (right) with different levels of enemies (see Tbl. 3 for curriculum design). Models from later stages perform better, especially on the difficult map, yet still keep good performance on the easier map.
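The history-frame ablation above amounts to changing how the 4-frame input stack is assembled. A minimal sketch, with `frames` standing in for the stream of preprocessed frames:

```python
import numpy as np

# The three input variants used in the history-frame ablation. The network
# always receives a stack of 4 frames; only their selection changes.
def make_input(frames, t, mode="normal"):
    if mode == "normal":        # frames t-3 ... t, in order
        stack = [frames[t - 3], frames[t - 2], frames[t - 1], frames[t]]
    elif mode == "duplicated":  # 4 copies of the current frame: no motion cue
        stack = [frames[t]] * 4
    elif mode == "reversed":    # recent frames in reverse: misleading motion
        stack = [frames[t], frames[t - 1], frames[t - 2], frames[t - 3]]
    return np.concatenate(stack, axis=-1)  # stack along the channel axis
```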
Table 4: Avg/max frags of each AI in the internal tournament (150 episodes of 10 minutes each).

                              FlatMap                CIGTrack1
                              Min   Mean    Max      Min   Mean    Max
F1 bot (reverse history)      1     9.89    19       -2    2.39    9
F1 bot (duplicated history)   10    24.62   37       2     8.50    17
F1 bot (w/o PT rules)         14    22.80   36       1     8.66    18
F1 bot                        16    25.17   37       5     10.34   17

Table 5: Performance evaluation (in terms of frags) on the two standard scenarios FlatMap and CIGTrack1 over 300 episodes. Our bot performs better with post-training rules.

Post-training Rules. Tbl. 5 shows that the post-training rules improve the performance. As future work, an end-to-end training involving delta buttons could make the bot better.

Internal Tournament. We also evaluate our AIs with internal tournaments (Tbl. 4). All our bots beat the built-in bots by a large margin, even though the built-in bots use privileged information. F1Pre, trained with only FlatMap, shows decent performance, but is not as good as the models trained on both FlatMap and CIGTrack1. The final bot F1 performs the best.

Behaviors. Visually, the three bots behave differently. F1Pre is a bit overtrained on FlatMap and does not move often, but when it sees enemies, even faraway ones, it will start to shoot. Occasionally it will move to a corner and pick up medkits. In CIGTrack1, F1Pre stays in one place and ambushes opponents who pass by. On the other hand, F1Plain and F1 always move forward and turn at corners. As expected, F1 moves and turns faster.

Tactics. All bots develop interesting local tactics when exchanging fire with an enemy: they slide around while shooting, which is quite effective for dodging the opponent's attacks. Also, when they shoot the enemy, they usually take advantage of the splash effect of the rocket to cause additional damage, e.g., shooting the wall when the enemy is moving. They do not pick up ammunition too often, even when they can no longer shoot. However, this disadvantage is mitigated by the nature of deathmatch: when a player dies, it respawns with ammunition. We also check the states with the highest/lowest estimated future value V(s) over a 10-episode evaluation of the F1 bot, from which we can speculate about its tactics. The highest value is V = 0.97, when the agent fired and was about to hit the enemy. One low value is V = -0.44 with ammo = 0, when the agent encountered an enemy at a corner but was out of ammunition. Both cases are reasonable."}, {"section_index": "12", "section_name": "5.3 COMPETITION", "section_text": "Table 6: Top 3 teams in ViZDoom AI Competition, Track 1. Our bot attended 11 out of 12 games, won 10 of them, and won the championship by a large margin. For design details, see Arnold [Lample & Chaplot (2016)] and CLYDE [Ratcliffe et al. (2017)].

We attended the ViZDoom AI Competition hosted by IEEE CIG. There are 2 tracks in the competition. Track 1 (Limited Deathmatch) uses a known map and fixed weapons, while Track 2 (Full Deathmatch) uses 3 unknown maps and a variety of weapons. Each bot fights against all others for 12 rounds of 10 minutes each.
Due to server capacity, each bot skips one match in the first 9 rounds. All bots are supposed to run in real time (>35 fps) on a GTX960 GPU.

Round     1    2    3    4    5    6    7    8    9    10   11   12   Total
Our bot   56   62   n/a  54   47   43   47   55   50   48   50   47   559
Arnold    36   34   42   36   36   45   36   39   n/a  33   36   40   413
CLYDE     37   n/a  38   32   37   30   46   42   33   24   44   30   393

Our F1 bot won 10 out of the 11 games it attended and won the championship of Track 1 by a large margin. We achieved 559 frags, 35.4% higher than the 413 frags achieved by Arnold [Lample & Chaplot (2016)], which uses extra game state for model training. On the other hand, IntelAct [Dosovitskiy & Koltun (2017)] won Track 2. The full videos for the two tracks have been released, as well as an additional game between Human and AIs. Our bot behaves reasonably and very human-like in Track 1. In the match between Human and AIs, our bot was even ahead of the human player for a short period (6:30 to 7:00).

Teaching agents to act properly in a complicated and adversarial 3D environment is a very challenging task. In this paper, we propose a new framework to train a strong AI agent in a First-Person Shooter (FPS) game, Doom, using a combination of state-of-the-art Deep Reinforcement Learning and Curriculum Training. Via playing against built-in bots in a progressive manner, our bot won the championship of Track 1 (known map) in the ViZDoom AI Competition. Furthermore, it learns to use motion features and to build its own tactics during the game, none of which are taught explicitly.

Currently, our bot is still a reactive agent that only remembers the last 4 frames in order to act. Ideally, a bot should be able to build a map of an unknown environment and localize itself, have a global plan to act, and visualize its reasoning process. We leave these to future work."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Babaeizadeh, Mohammad, Frosio, Iuri, Tyree, Stephen, Clemons, Jason, and Kautz, Jan. Reinforcement learning through asynchronous advantage actor-critic on a GPU. International Conference on Learning Representations (ICLR), 2017.

Devlin, Sam, Kudenko, Daniel, and Grzes, Marek. An empirical study of potential-based reward shaping and advice in complex, multi-agent systems. Advances in Complex Systems, 14(02):251-278, 2011.

Grondman, Ivo, Busoniu, Lucian, Lopes, Gabriel AD, and Babuska, Robert. A survey of actor-critic reinforcement learning: Standard and natural policy gradients. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6):1291-1307, 2012.

Abadi, Martin, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Gregory S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian J., Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mane, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek Gordon, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul A., Vanhoucke, Vincent, Vasudevan, Vijay, Viegas, Fernanda B., Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. CoRR, abs/1603.04467, 2016. URL http://arxiv.org/abs/1603.04467.

Hausknecht, Matthew J. and Stone, Peter. Deep recurrent Q-learning for partially observable MDPs. CoRR, abs/1507.06527, 2015.
URL http://arxiv.org/abs/1507.06527.

Jiang, Lu, Meng, Deyu, Zhao, Qian, Shan, Shiguang, and Hauptmann, Alexander G. Self-paced curriculum learning. In AAAI, volume 2, pp. 6, 2015.

Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Konda, Vijay R and Tsitsiklis, John N. Actor-critic algorithms. In NIPS, volume 13, pp. 1008-1014, 1999.

Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare, Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Sutton, Richard S and Barto, Andrew G. Reinforcement learning: An introduction, volume 1. 1998.

Sutton, Richard Stuart. Temporal credit assignment in reinforcement learning. 1984.

van Waveren, J.M.P. The Quake III Arena bot. University of Technology Delft, 2001.

Williams, Ronald J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

Kempka, Michał, Wydmuch, Marek, Runc, Grzegorz, Toczek, Jakub, and Jaśkowski, Wojciech. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.

Mnih, Volodymyr, Badia, Adria Puigdomenech, Mirza, Mehdi, Graves, Alex, Lillicrap, Timothy P, Harley, Tim, Silver, David, and Kavukcuoglu, Koray. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.

Ng, Andrew Y, Harada, Daishi, and Russell, Stuart. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pp. 278-287, 1999.

Peters, Jan and Schaal, Stefan. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21(4):682-697, 2008.

Ratcliffe, D., Devlin, S., Kruschwitz, U., and Citi, L. CLYDE: A deep reinforcement learning DOOM playing agent. AAAI Workshop on What's Next for AI in Games, 2017."}]
Bkul3t9ee | [{"section_index": "0", "section_name": "UNSUPERVISED PERCEPTUAL REWARDS FOR IMITATION LEARNING", "section_text": "Pierre Sermanet, Kelvin Xu* & Sergey Levine
{sermanet,kelvinxx,slevine}@google.com"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Reward function design and exploration time are arguably the biggest obstacles to the deployment of reinforcement learning (RL) agents in the real world. In many real-world tasks, designing a suitable reward function takes considerable manual engineering and often requires additional, and potentially visible, sensors to be installed just to measure whether the task has been executed successfully. Furthermore, many interesting tasks consist of multiple steps that must be executed in sequence. Even when the final outcome can be measured, it does not necessarily provide useful feedback on these implicit intermediate steps or sub-goals. To address these issues, we propose leveraging the abstraction power of intermediate visual representations learned by deep models to quickly infer perceptual reward functions from small numbers of demonstrations. We present a method that is able to identify the key intermediate steps of a task from only a handful of demonstration sequences, and automatically identify the most discriminative features for identifying these steps. This method makes use of the features in a pre-trained deep model, but does not require any explicit sub-goal supervision. The resulting reward functions, which are dense and smooth, can then be used by an RL agent to learn to perform the task in real-world settings. To evaluate the learned reward functions, we present qualitative results on two real-world tasks and a quantitative evaluation against a human-designed reward function. We also demonstrate that our method can be used to learn a complex real-world door opening skill using a real robot, even when the demonstration used for reward learning is provided by a human using their own hand. To our knowledge, these are the first results showing that complex robotic manipulation skills can be learned directly and without supervised labels from a video of a human performing the task."}, {"section_index": "2", "section_name": "INTRODUCTION", "section_text": "Social learning, such as imitation, plays a critical role in allowing humans and animals to quickly acquire complex skills in the real world. Humans can use this weak form of supervision to acquire behaviors from very small numbers of demonstrations, in sharp contrast to deep reinforcement learning (RL) methods, which typically require extensive training data. In this work, we make use of two ideas to develop a scalable and efficient imitation learning method: first, imitation makes use of extensive prior knowledge to quickly glean the "gist" of a new task from even a small number of demonstrations; second, imitation involves both observation and trial-and-error learning (RL). Building on these ideas, we propose a reward learning method for understanding the intent of a user demonstration through the use of pre-trained visual features, which provide the "prior knowledge" for efficient imitation. Our algorithm aims to discover not only the high-level goal of a task, but also the implicit sub-goals and steps that comprise more complex behaviors. Extracting such sub-goals can allow the agent to make maximal use of the information contained in the demonstration.
Once the reward function has been extracted, the agent can use its own experience at the task to determine the physical structure of the behavior, even when the reward is provided by an agent with a substantially different embodiment (e.g. a human providing a demonstration for a robot).

* Work done as part of the Google Brain Residency program (g.co/brainresidency)

[Figure 1 diagram: a demonstrator (human or robot) provides a few demonstrations; a pre-trained deep model (e.g. Inception) yields general high-level features; intermediate steps are discovered without supervision, and discriminative features are selected by maximizing step discrimination across all videos (offline computation); the resulting real-time perceptual reward for multiple intermediate steps drives a learning agent trained with reinforcement learning on the real robot.]

Figure 1: Method overview. Given a few demonstration videos of the same action, our method discovers intermediate steps, then trains a classifier for each step on top of the mid- and high-level representations of a pre-trained deep model (in this work, we use all activations starting from the first "mixed" layer that follows the first 5 convolutional layers). The step classifiers are then combined to produce a single reward function per step. These intermediate rewards are combined into a single reward function. The reward function is then used by a real robot to learn to perform the demonstrated task, as shown in Section 3.2.

To our knowledge, our method is the first reward learning technique that learns generalizable vision-based reward functions for complex robotic manipulation skills from only a few demonstrations provided directly by a human. Although prior methods have demonstrated reward learning with vision for real-world robotic tasks, they have either required kinesthetic demonstrations with robot state for reward learning (Finn et al., 2015), or else required low-dimensional state spaces and numerous demonstrations (Wulfmeier et al., 2016). The contributions of this paper are:

- A method for perceptual reward learning from only a few demonstrations of real-world tasks. Reward functions are dense and incremental, with automated unsupervised discovery of intermediate steps.
- The first vision-based reward learning method that can learn a complex robotic manipulation task from a few human demonstrations in real-world robotic experiments.
- A set of empirical experiments that show that the learned visual representations inside a pre-trained deep model are general enough to be directly used to represent goals and sub-goals for manipulation skills in new scenes without retraining."}, {"section_index": "3", "section_name": "1.1 RELATED WORK", "section_text": "Deep reinforcement learning and deep robotic learning work has previously examined learning reward functions based on images. One of the most common approaches to image-based reward functions is to directly specify a "target image" by showing the learner the raw pixels of a successful task completion state, and then using distance to that image (or its latent representation) as a reward function (Lange et al., 2012; Finn et al., 2015; Watter et al., 2015). However, this approach
Second, the use of a target image does not provide the learner with information about which facet of the image is more or less important for task success, which might result in the learner excessively emphasizing irrelevant factors of variation (such as the color of a door due to light and shadow) at the expense of relevant factors (such as whether or not the door is open or closed). Analyzing a collection of demonstrations to learn a parsimonious reward func- tion that explains the demonstrated behavior is known as inverse reinforcement learning (IRL) (Ng et al., 2000). A few recently proposed IRL algorithms have sought to combine IRL with vision and deep network representations (Finn et al., 2016b; Wulfmeier et al., 2016). However, scaling IRL tc high-dimensional systems and open-ended reward representations is very challenging. The previous work closest to ours used images together with robot state information (joint angles and end effector pose), with tens of demonstrations provided through kinesthetic teaching (Finn et al., 2016b). The approach we propose in this work which can be interpreted as a simple and efficient approximation to IRL, can use demonstrations that consist of videos of a human performing the task using their own body, and can acquire reward functions with intermediate sub-goals using just a few examples. This kind of efficient vision-based reward learning from videos of humans has not been demonstrated in prior IRL work. The idea of perceptual reward functions using raw pixels was also explored by Ed wards et al. (2016) which, while sharing the same spirit as this work, was limited to simple synthetic tasks and used single images as perceptual goals rather than multiple demonstration videos."}, {"section_index": "4", "section_name": "SIMPLE INVERSE REINFORCEMENT LEARNING WITH VISUAL FEATURES", "section_text": "The key insight in our approach is that we can exploit the semantically meaningful and powerful. features in a pre-trained deep neural network to infer task goals and sub-goals using a very simple approximate inverse reinforcement learning method. The pre-trained network effectively transfers prior knowledge about the visual world to make imitation learning fast and robust. Our approach can. be interpreted as a simple approximation to inverse reinforcement learning under a particular choice of system dynamics, as discussed in Section 2.1. While this approximation is somewhat simplistic. it affords an efficient and scaleable learning rule that avoids overfitting even when trained on a small number of demonstrations. As depicted in Fig. 1, our algorithm first segments the demonstrations into segments based on perceptual similarity, as described in Section 2.2. Intuitively, the resulting. segments correspond to sub-goals or steps of the task. The segments can then be used as a supervi-. sion signal for learning steps classifiers, described in Section 2.3, which produces a single perception. reward function for each step of the task. The combined reward function can then be used with a reinforcement learning algorithm to learn the demonstrated behavior. Although this method for ex-. tracting reward functions is exceedingly simple, its power comes from the use of highly general and robust pre-trained visual features, and our key empirical result is that such features are sufficient tc. 
acquire effective and generalizable reward functions for real-world manipulation skills.

We use the Inception network (Szegedy et al., 2015) pre-trained for ImageNet classification (Deng et al., 2009) to obtain the visual features for representing the learned rewards. It is well known that visual features in such networks are quite general and can be reused for other visual tasks. However, it is less clear if sparse subsets of such features can be used directly to represent goals and sub-goals for real-world manipulation skills. Our experimental evaluation suggests that indeed they can, and that the resulting reward representations are robust and reliable enough for real-world robotic learning without any finetuning of the features. In this work, we use all activations starting from the first "mixed" layer that follows the first 5 convolutional layers (this layer's activation map is of size 35x35x256 given a 299x299 input). While this paper focuses on visual perception, the approach is general and can be applied to other modalities (e.g. audio and tactile)."}, {"section_index": "5", "section_name": "2.1 INVERSE REINFORCEMENT LEARNING WITH TIME-INDEPENDENT GAUSSIAN MODELS", "section_text": "Inverse reinforcement learning can be performed with a variety of algorithms (Ng et al., 2000), ranging from margin-based methods (Abbeel & Ng, 2004; Ratliff et al., 2006) to methods based on probabilistic models (Ramachandran & Amir, 2007; Ziebart et al., 2008). In this work, we use a very simple approximation to the MaxEnt IRL model (Ziebart et al., 2008), a popular probabilistic approach to IRL. We will use $s_t$ to denote the visual feature activations at time $t$, which constitute the state, $s_{it}$ to denote the $i$th feature at time $t$, and $\tau = \{s_1, \dots, s_T\}$ to denote a sequence or trajectory of these activations in a video.
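Concretely, the per-frame state $s_t$ can be obtained as follows. This is a sketch assuming the tf.keras InceptionV3 port (whose 'mixed0' layer matches the 35x35x256 map described above), not the authors' exact setup; for brevity it keeps only the first "mixed" layer, whereas the paper concatenates all activations from that layer onward.

```python
import numpy as np
import tensorflow as tf

# Extract the "mixed"-layer activations that serve as the per-frame state s_t.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
feats = tf.keras.Model(base.input, base.get_layer("mixed0").output)

def frame_activations(frames):
    """frames: float32 array [T, 299, 299, 3] in [0, 255] -> s_t as [T, N]."""
    x = tf.keras.applications.inception_v3.preprocess_input(np.copy(frames))
    a = feats.predict(x, verbose=0)       # [T, 35, 35, 256]
    return a.reshape(len(frames), -1)     # one flat activation vector per frame
```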
We adopt precisely this approach in our work: instead of approximating the complex posterior distribution over trajectories under nonlinear dynamics, we use a simple biased model that affords efficient learning and minimizes overfitting. Specifically, we assume that all trajectories are dynamically feasible, and that the distri- bution over each activation at each time step is independent of all other activations and all other time steps. This corresponds to the IRL equivalent of a naive Bayes model: in the same way that naive Bayes uses an independence assumption to mitigate overfitting in high-dimensional feature spaces. we use independence between both time steps and features to learn from very small numbers of demonstrations. Under this assumption, the probability of a trajectory t factorizes according to\nT N T N 1 11 111 p(T) = p(Sit) = exp(R(Sit)) Lit t=1i=1 t=1i=1"}, {"section_index": "6", "section_name": "2.2 DISCOVERY OF INTERMEDIATE STEPS", "section_text": "The simple IRL model in the previous section can be used to acquire a single quadratic reward. function in terms of the visual features st. However, for complex multi-stage tasks, this model can be too coarse, making task learning slow and difficult. We therefore instead fit multiple quadratic reward functions, with one reward function per intermediate step or goal. These steps are discovered automatically in the first stage of our method, which is performed independently on each demonstra- tion. If multiple demonstrations are available, they are pooled together in the feature selection step discussed in the next section, and could in principle be combined at the segmentation stage as well,. though we found this to be unnecessary in our prototype. The intermediate steps model extends the. simple independent Gaussian model in the previous section by assuming that.\nT N 1 111 p(T) = exp(Riqt( Zit S t=1i=1\nT 1 p(T) = p(S1,..., ST) : R(st) exp Z t=1\nwhich corresponds to a reward function of the form Rt(st) = =1 R(s;t). We can then simply choose a form for R,(st) that can be normalized analytically, which in our case is a quadratic in Sit, such that exp(R,(sit))/Zt is a Gaussian distribution, and the original trajectory distribution. is a naive Bayes model. While this approximation is quite drastic, it yields an exceedingly simple learning rule: in the most basic version, we have only to fit the mean and variance of each feature distribution, and then use the log of the resulting Gaussian as the reward..\nwhere gt is the index of the goal or step corresponding to time step t. Learning then corresponds to identifying the boundaries of the steps in the demonstration, and fitting independent Gaussian feature distributions at each step. Note that this corresponds exactly to segmenting the demonstration such that the variance of each feature within each segment is minimized.\nn this work, we employ a simple recursive video segmentation algorithm as described in Alg ithm 1. Intuitively, this method breaks down a sequence in a way that each frame in a segment abstractly similar to each other frame in that segment. The number of segments is provided manuall n this approach, though it would be straightforward to also utilize standard model selection criter or choosing this number automatically. There exists a body of unsupervised video segmentatic nethods Yuan et al. (2007) which would likely enable a less constrained set of demonstrations e used. 
While this is an important avenue of future work, we show that our simple approach sufficient to demonstrate the efficacy of our method on a realistic set of demonstrations. We als nvestigate how to reduce the search space of similar feature patterns across videos in section 2. This would render discovery of video alignments tractable for an optimization method such as th one used in Joulin et al. (2014) for video co-localization.\nThe complexity of Algorithm 1 is O(nm) where n is the number of frames in a sequence and m the number of splits. Note that dynamic programming is not applicable to this algorithm because. each sub-problem, i.e. how to split a sequence after the ith frame, depends on the segmentation. chosen before the ith frame. We also experiment with a greedy binary version of this algorithm. (Algorithm 2 detailed in section A.1): first split the entire sequence in two, then recursively split each new segment in two. While not exactly minimizing the variance across all segments, it is. significantly more efficient (O(n? log m)) and yields qualitatively sensible results..\nAlgorithm 1 Recursive similarity maximization, where AverageStdO is a function that com-. putes the average standard deviation over a set of frames or over a set of values, Join() is a. function that joins values or lists together into a single list, n is the number of splits desired and min_size is the minimum size of a split..\nfunction SpLiT(video, start, end, n, min_size, prev_std = ) if n = 1 then return [], [AvERAGESTD(video[start : end])] end if min_std None min_std_list[] min_split for i start + min_size to end - ((n - 1) * min_size)) do. std1 [AvERAGESTD(video[start : i])] splits2, std2 SpLiT(video, i, end, n - 1, min_size, prev_std + std1) avg_std AvERAGESTD(JOIN(prev_std, std1, std2)) if min_std = None or avg_std < min_std then. min_std + avg_std min_std_list JoIN(std1, std2) min_split JoIN(i, splits2) end if end for return min_split, min_std_list. end function"}, {"section_index": "7", "section_name": "2.3 STEPS CLASSIFICATION", "section_text": "In this section we explore learning a steps classifier on top of the pre-trained deep model using regular linear classifier and a custom feature selection classifier.\nIntent understanding requires identifying highly discriminative features of a specific goal while re. maining invariant to unrelated variation (e.g. lighting, color, viewpoint). The relevant discriminativ features may be very diverse and more or less abstract, which motivates our intuition to tap into th activations of deep models at different depths. Deep models cover a large set of representations tha can be useful, from spatially dense and simple features in the lower layers (e.g. large collection o detected edges) to gradually more spatially sparse and abstract features (e.g. few object classes).\nWe train a simple linear layer which takes as input the same mid to high level activations used for steps discovery and outputs a score for each step. This linear layer is trained using logistic regression and the underlying deep model is not fine-tuned. Given the large input (1,453,824 units) and the low data regime (11 to 19 videos of 30 to 50 frames each), we hypothesize that this model should severely overfit to the training data and perform poorly in validation and testing. 
We also hypothesize that there exists a small subset of mid- to high-level features that are sparse, independent, and can readily and compactly discriminate previously unseen inputs. We investigate that hypothesis using a simple feature selection method described in Appendix A.3. The existence of a small subset of discriminative features can be useful for reducing overfitting in low-data regimes, but more importantly can allow a drastic reduction of the search space for the unsupervised steps discovery. Indeed, since each frame is described by millions of features, finding patterns of feature correlations across videos leads to a combinatorial explosion. However, the problem may become tractable if there exists a low-dimensional subset of features that leads to reasonably accurate steps classification. We test and discuss that hypothesis in Section 3.1.2."}, {"section_index": "8", "section_name": "3 EXPERIMENTS", "section_text": "In this section, we discuss our empirical evaluation, starting with an analysis of the learned reward functions in terms of both qualitative reward structure and quantitative segmentation accuracy. We then present results for a real-world validation of our method on robotic door opening.

In order to use our learned perceptual reward functions in a complete skill learning system, we must also choose a reinforcement learning algorithm and a policy representation. While in principle any reinforcement learning algorithm could be suitable for this task, we chose a method that is efficient for evaluation on real-world robotic systems in order to validate our approach. The method we use is based on the PI2 reinforcement learning algorithm (Theodorou et al., 2010). Our implementation, which is discussed in more detail in Appendix A.4, uses a relatively simple linear-Gaussian parameterization of the policy, which corresponds to a sequence of open-loop torque commands with fixed linear feedback to correct for perturbations. This method also requires initialization from example demonstrations to learn complex manipulation tasks efficiently. A more complex neural network policy could also be used (Chebotar et al., 2016), and more sophisticated RL algorithms could also learn skills without demonstration initialization. However, since the main purpose of this component is to validate the learned reward functions, we used this simple approach to test our rewards quickly and efficiently.

We report results on two demonstrated tasks: door opening and liquid pouring. We collected about a dozen training videos for each task using a smartphone. As an example, Fig. 2 shows the entire training set used for the pouring task.

Figure 2: Entire training set for the pouring task (11 demonstrations)"}, {"section_index": "9", "section_name": "3.1.1 QUALITATIVE ANALYSIS", "section_text": "While a door opening sensor can be engineered using sensors hidden in the door, measuring pouring or container tilting would be quite complicated, would visually alter the scene, and is unrealistic for learning in the wild. Visual reward functions are therefore an excellent choice for complex physical phenomena such as liquid pouring. In Fig. 3, we present the combined reward functions for test videos on the pouring task, and Fig. 10 shows the intermediate rewards for each sub-goal. We plot the predicted reward functions for both successful and failed task executions in Fig. 11. We observe
that for "missed" executions, where the task is only partially performed, the intermediate steps are correctly classified. Fig. 9 details qualitative results of unsupervised step segmentation for the door opening and pouring tasks. For the door task, the 2-segment splits are often quite in line with what one would expect, while a 3-segment split is less accurate. We also observe that the method is robust to the presence or absence of the handle on the door, as well as its opening direction. We find that for the pouring task, the 4-segment split often yields the most sensible breakdown. It is interesting to note that the 2-segment split usually occurs when the glass is about half full."}, {"section_index": "10", "section_name": "Failure Cases", "section_text": "The intermediate reward function for the door opening task which corresponds to a human hand manipulating the door handle seems rather noisy or wrong in 10b, 10c and 10e ("action1" on the y-axis of the plots). The reward function in 11f remains flat while liquid is being poured into the glass. The liquid being somewhat transparent, we suspect that it looks too similar to the transparent glass for the function to fire.

Figure 3: Examples of "pouring" reward functions. We show here a few successful examples; see Fig. 11 for results on the entire test set. In 3a we observe a continuous and incremental reward as the task progresses, saturating as it is completed. 3b increases as the bottle appears but successfully detects that the task is not completed, while in 3c it successfully detects that the action is already completed from the start."}, {"section_index": "11", "section_name": "3.1.2 QUANTITATIVE ANALYSIS", "section_text": "We evaluate the quantitative accuracy of the unsupervised steps discovery in Table 1, while Table 2 presents quantitative generalization results for the learned reward on a test video of each task. For each video, ground-truth intermediate steps were provided by human supervision for the purpose of evaluation. While this ground truth is subjective, since each task can be broken down in multiple ways, it is consistent for the simple tasks in our experiments. We use the Jaccard similarity measure (intersection over union) to indicate how much a detected step overlaps with its corresponding ground truth.

dataset      method                 2 steps                       3 steps
(training)                          step 1   step 2   average     step 1   step 2   step 3   average
door         ordered random steps   59.4%    45.6%    52.5%       48.0%    58.1%    60.1%    55.4%
             unsupervised steps     84.0%    68.1%    76.1%       57.6%    75.1%    68.1%    66.9%
pouring      ordered random steps   65.2%    66.6%    65.9%       46.2%    46.3%    66.3%    52.9%
             unsupervised steps     92.3%    90.5%    91.6%       79.7%    48.0%    48.6%    58.8%

Table 1: Unsupervised steps discovery accuracy (Jaccard overlap on training sets) versus the ordered random steps baseline.

In Table 1, we compare our method against a random baseline. Because we assume the same step order in all demonstrations, we also order the random steps in time to provide a fair baseline. Note that the random baseline performs fairly well because the steps are distributed somewhat uniformly in time. Should the steps be much less temporally uniform, the random baseline would be expected to perform very poorly, while our method should maintain similar performance. We compare splitting between 2 and 3 steps and find that, for both tasks, 2 steps are easier to discover, probably because these tasks exhibit one strong visual change each while the other steps are more subtle. Note that our unsupervised segmentation only works when full sequences are available, while our learned reward functions can be used in real time without accessing future frames. Hence, in these experiments we evaluate the unsupervised segmentation on the training set only, and evaluate the reward function on the test set.
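The overlap metric is simple to compute from (start, end) frame spans. A minimal sketch:

```python
# Jaccard similarity (intersection over union) between a detected step and
# its ground-truth span; steps are (start, end) frame indices, end exclusive.
def jaccard(pred, truth):
    (ps, pe), (ts, te) = pred, truth
    inter = max(0, min(pe, te) - max(ps, ts))
    union = (pe - ps) + (te - ts) - inter
    return inter / union if union else 0.0

# Example: jaccard((0, 40), (10, 50)) == 30 / 50 == 0.6
```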
In Table 2, we evaluate the reward functions individually for each step on the test set. For that purpose, we binarize the reward function using a threshold of 0.5. The random baseline simply outputs true or false at each timestep. We observe that the learned feature selection and linear classifier functions outperform the baseline by about a factor of 2. It is not clear exactly what minimum level of accuracy is required to successfully learn to perform these tasks, but we show in Section 3.2.2 that the reward accuracy on the door task is sufficient to reach a 100% success rate with a real robot. Individual per-step accuracy details can be found in Table 3.

dataset      classification method   2 steps average   3 steps average
(testing)
door         random baseline         33.6% ± 1.6       25.5% ± 1.6
             feature selection       72.4% ± 0.0       52.9% ± 0.0
             linear classifier       75.0% ± 5.5       53.6% ± 4.7
pouring      random baseline         31.1% ± 3.4       25.1% ± 0.1
             feature selection       65.4% ± 0.0       40.0% ± 0.0
             linear classifier       69.2% ± 2.0       49.6% ± 8.0

Table 2: Reward function accuracy (Jaccard overlap on test sets).

Surprisingly, the linear classifier performs well and does not appear to overfit on our relatively small training set. Since the feature selection algorithm is rather close to the linear classifier compared to the baseline, using feature selection to avoid overfitting is not necessary. However, the idea that a small subset of features (32 in this case) can lead to reasonable classification accuracy is verified, and is an important piece of information for drastically reducing the search space for future work in unsupervised steps discovery. Additionally, we show in Fig. 4 that the feature selection approach works well when the number of features n is in the region [32, 64] but collapses to 0% accuracy when n > 8192."}, {"section_index": "12", "section_name": "3.2 REAL-WORLD ROBOTIC DOOR OPENING", "section_text": "In this section, we aim to answer the question of whether our previously visualized reward functions can be used to learn a real-world robotic motion skill. We experiment on a door opening skill, where we adapt a demonstrated door opening to a novel configuration, such as a different position or orientation of the door. Following the experimental protocol in prior work (Chebotar et al., 2016), we adapt an imperfect kinesthetic demonstration which we ensure succeeds at least occasionally (about 10% of the time). These demonstrations consist only of robot poses, and do not include images. We then use a variety of different video demonstrations, which contain images but not robot poses, to learn the reward function. These videos include demonstrations with other doors, and even demonstrations provided by a human using their own body, rather than through kinesthetic teaching with the robot.

[Figure 4 plot: classification accuracy vs. number of selected features, on a log scale from 100 to 1,000,000.]

Figure 4: Feature selection classification accuracy on the pouring validation set for 2-steps classification.
By varying the number of features n selected, we show that the method yields good results in the region n ∈ [32, 64], but collapses to 0% accuracy starting at n = 8192.

Figure 5: Robot arm setup. Note that our method does not make use of the sensor on the back handle of the door, but it is used in our comparison to train a baseline method with the ground-truth reward.

Figure 5 shows the experimental setup. We use a 7-DoF robotic arm with a two-finger gripper, and a camera placed above the shoulder, which provides monocular RGB images. For our baseline PI2 policy, we closely follow the setup of Chebotar et al. (2016), which uses an IMU sensor in the door handle to provide both a cost and feedback as part of the state of the controller. In contrast, in our approach we remove this sensor both from the state representation provided to PI2 and in our reward, replacing the target IMU state with the output of a deep neural network.

Figure 6: Rewards from human demonstration only. Here we show the rewards produced when trained on humans only (see Fig. 12). In 6a, we show the reward on a human test video. In 6b, we show what the reward produces when the human hand misses opening the door. In 6c, we show the reward successfully saturates when the robot opens the door even though it has not seen a robot arm before. Similarly, in 6d and 6e we show it still works with some amount of variation of the door which was not seen during training (white door and black handle, blue door, rotations of the door)."}, {"section_index": "13", "section_name": "3.2.1 DATA", "section_text": "We experiment with a range of different demonstrations from which we derive our reward functions, varying the source of the demo (human vs. robotic), the number of sub-goals we extract, and the appearance of the door. We record monocular RGB images with a camera placed above the shoulder of the arm. The door is cropped from the images, and the resulting image is re-sized such that the shortest side is 299 pixels with preserved aspect ratio. The input to our convolutional feature extractor (Szegedy et al., 2015) is the 299x299 center crop."}, {"section_index": "14", "section_name": "3.2.2 QUALITATIVE ANALYSIS", "section_text": "We evaluate our reward functions qualitatively by plotting our perceptual reward functions below the demonstrations, with a variety of door types and demonstrators (e.g. robot or human). As can be seen in Fig. 6, and in the real experiments of Fig. 7, the reward functions are useful to a robotic arm even when only human demonstrations are shown, as depicted in Fig. 12. Moreover, we exhibit robustness to variations in appearance.

[Figure 7 plot: door opening success rate vs. iteration number, for the baseline PI2 and our method with 2 sub-goals (robot demonstration), 5 sub-goals (robot demonstration), and 4 sub-goals (human demonstrations only, slight appearance variations).]

Figure 7: Door opening success rate at each iteration of learning on the real robot. The PI2 baseline method uses a ground-truth reward function obtained by instrumenting the door. Note that the rewards learned by our method, even from videos of humans or different doors, learn comparably or faster when compared to the ground-truth reward."}, {"section_index": "15", "section_name": "3.2.3 QUANTITATIVE ANALYSIS", "section_text": "We compare the success rate of the visual reward against a baseline PI2 method that uses the ground-truth reward function obtained by instrumenting the door with an IMU.
We run PI2 for 11 iterations with 10 sampled trajectories at each iteration. As can be seen in Fig. 7, we obtain convergence speeds similar to our baseline model, with our method also able to open the door consistently. Since our local policy is able to obtain high-reward candidate trajectories, this is strong evidence that a perceptual reward could be used to train a global policy in the same manner as Chebotar et al. (2016)."}, {"section_index": "16", "section_name": "4 CONCLUSION", "section_text": "In this paper, we present a method for automatically identifying important intermediate goals given a few visual demonstrations of a task. By leveraging the general features learned from pre-trained deep models, we propose a method for rapidly learning an incremental reward function from human demonstrations, which we successfully demonstrate on a real robotic learning task. We show that pre-trained models are general enough to be used without retraining. We also show that there exists a small subset of pre-trained features that are highly discriminative even for previously unseen scenes and which can be used to reduce the search space for future work in unsupervised steps discovery.

Another compelling direction for future work is to explore how reward learning algorithms can be combined with robotic lifelong learning. One of the biggest barriers for lifelong learning in the real world is the ability of an agent to obtain reward supervision, without which no learning is possible. Continuous learning using unsupervised rewards promises to substantially increase the variety and diversity of experience that is available for robotic reinforcement learning, resulting in more powerful, robust, and general robotic skills."}, {"section_index": "17", "section_name": "REFERENCES", "section_text": "Yevgen Chebotar, Mrinal Kalakrishnan, Ali Yahya, Adrian Li, Stefan Schaal, and Sergey Levine. Path integral guided policy search. arXiv preprint arXiv:1610.00529, 2016.

Jan Peters, Katharina Mülling, and Yasemin Altün. Relative entropy policy search. In AAAI Conference on Artificial Intelligence, 2010.

Nathan D Ratliff, J Andrew Bagnell, and Martin A Zinkevich. Maximum margin planning. In Proceedings of the 23rd International Conference on Machine Learning, pp. 729-736. ACM, 2006.

We would like to thank Vincent Vanhoucke for helpful discussions and feedback.
We would also like to thank Mrinal Kalakrishnan and Ali Yahya for indispensable guidance throughout this project.

Pieter Abbeel and Andrew Y Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 1. ACM, 2004.

Abdeslam Boularias, Jens Kober, and Jan Peters. Relative entropy inverse reinforcement learning. 2011.

Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep spatial autoencoders for visuomotor learning. arXiv preprint arXiv:1509.06113, 2015.

Markus Wulfmeier, Dominic Zeng Wang, and Ingmar Posner. Watch this: Scalable cost-function learning for path planning in urban environments. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016. arXiv preprint: http://arxiv.org/abs/1607.02329.

Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. Maximum entropy inverse reinforcement learning. In AAAI, pp. 1433-1438, 2008."}, {"section_index": "18", "section_name": "A ALGORITHMS DETAILS", "section_text": "Algorithm 2 Greedy and binary algorithm similar to, and utilizing, Algorithm 1, where AverageStd() is a function that computes the average standard deviation over a set of frames or over a set of values, Join() is a function that joins values or lists together into a single list, n is the number of splits desired and min_size is the minimum size of a split.

function BINARYSPLIT(video, start, end, n, min_size, prev_std = [])
    if n = 1 then
        return [], []
    end if
    splits0, std0 ← SPLIT(video, start, end, 2, min_size)
    if n = 2 then
        return splits0, std0
    end if
    splits1, std1 ← BINARYSPLIT(video, start, splits0[0], CEIL(n/2), min_size)
    splits2, std2 ← BINARYSPLIT(video, splits0[0] + 1, end, FLOOR(n/2), min_size)
    all_splits ← []
    all_std ← []
    if splits1 ≠ [] then
        Join(all_splits, splits1)
        Join(all_std, std1)
    else
        Join(all_std, std0[0])
    end if
    if splits0 ≠ [] then
        Join(all_splits, splits0[0])
    end if
    if splits2 ≠ [] then
        Join(all_splits, splits2)
        Join(all_std, std2)
    else
        Join(all_std, std0[1])
    end if
    return all_splits, all_std
end function"}, {"section_index": "19", "section_name": "A.2 COMBINING INTERMEDIATE REWARDS", "section_text": "From the two previous sections, we obtain one reward function per intermediate step discovered by the unsupervised algorithm. These need to be combined so that the RL algorithm uses a single reward function which partially rewards intermediate steps but mostly rewards the final one. The initial step is ignored, as it is assumed to be the resting starting state in the demonstrations. We opt for the maximum range of each reward to be twice the maximum range of its preceding reward, and sum them as follows:

$$R(a) = \sum_{i=2}^{n} R_i(a) \, 2^{(i-1)}$$

where $n$ is the number of intermediate rewards detected and $a$ is an activations vector. An example of this combination is shown in Fig. 8."}, {"section_index": "20", "section_name": "A.3 FEATURE SELECTION ALGORITHM", "section_text": "Here we describe the feature selection algorithm we use to investigate the presence of a small subset of discriminative features in mid- to high-level layers of a pre-trained deep network. To select the most discriminative features, we use a simple scoring heuristic. Each feature $i$ is first normalized by subtracting the mean and dividing by the standard deviation over all training sequences. We then rank the features for each sub-goal according to their distance $z_i$ to the average statistics of the sets of positive and negative frames for a given goal:

$$z_i = \frac{|\mu_i^+ - \mu_i^-|}{\alpha\,\sigma_i^+ + \sigma_i^-}$$

where $\mu^+$ and $\sigma^+$ are the mean and standard deviation of all "positive" frames and $\mu^-$ and $\sigma^-$ those of all "negative" frames (the frames that do not contain the sub-goal). Only the top-$M$ features are retained to form the reward function $R_g()$ for the sub-goal $g$, which is given by the log-probability of an independent Gaussian distribution over the relevant features:

$$R_g(a) = \frac{1}{M} \sum_{j=1}^{M} \log \mathcal{N}\left(a_{i_j};\, \mu_{i_j}, \sigma_{i_j}\right)$$

where $i_j$ indexes the top-$M$ selected features. We empirically choose $\alpha = 5.0$ and $M = 32$ for our subsequent experiments. At test time, we do not know when the system transitions from one goal to another, so instead of time-indexing the goals, we instead combine all of the goals into a single time-invariant reward function, where later steps yield higher reward than earlier steps, as described in Appendix A.2.
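The scoring heuristic, sub-goal reward, and combination above are compact; a minimal sketch follows. The z formula mirrors the reconstruction above (which is itself recovered from a damaged source), and the small constants guarding against division by zero are our additions.

```python
import numpy as np

# Rank features by the z-distance between positive/negative statistics, keep
# the top-M, and score a frame by the average Gaussian log-probability over
# the selected features. alpha=5.0 and M=32 follow the text.
def select_features(pos, neg, alpha=5.0, M=32):
    # pos, neg: [frames, N] normalized activations with/without the sub-goal
    z = np.abs(pos.mean(0) - neg.mean(0)) / (alpha * pos.std(0) + neg.std(0) + 1e-8)
    return np.argsort(z)[-M:]

def subgoal_reward(pos, idx):
    mu, var = pos[:, idx].mean(0), pos[:, idx].var(0) + 1e-6
    def R_g(a):                         # a: one frame's activations
        x = a[idx]
        return np.mean(-0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var))
    return R_g

# Combine per-step rewards with doubling ranges (Appendix A.2); assumes each
# R_i has been normalized to a common range such as [0, 1].
def combined_reward(rewards):           # rewards: [R_2, ..., R_n]
    return lambda a: sum(R(a) * 2 ** i for i, R in enumerate(rewards, start=1))
```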
dataset             method              step 1         step 2         step 3         average
door (2 steps)      random baseline     40.8% ± 1.0    26.3% ± 4.1                   33.6% ± 1.6
                    feature selection   85.1% ± 0.0    59.7% ± 0.0                   72.4% ± 0.0
                    linear classifier   79.7% ± 6.0    70.4% ± 5.0                   75.0% ± 5.5
door (3 steps)      random baseline     20.8% ± 1.1    31.8% ± 1.6    23.8% ± 2.3    25.5% ± 1.6
                    feature selection   56.9% ± 0.0    47.7% ± 0.0    54.1% ± 0.0    52.9% ± 0.0
                    linear classifier   46.0% ± 6.9    47.5% ± 4.2    67.2% ± 3.3    53.6% ± 4.7
pouring (2 steps)   random baseline     39.2% ± 2.9    22.9% ± 3.9                   31.1% ± 3.4
                    feature selection   76.2% ± 0.0    54.6% ± 0.0                   65.4% ± 0.0
                    linear classifier   78.2% ± 2.4    60.2% ± 1.7                   69.2% ± 2.0
pouring (3 steps)   random baseline     22.5% ± 0.6    38.8% ± 0.8    13.9% ± 0.1    25.1% ± 0.1
                    feature selection   32.9% ± 0.0    55.2% ± 0.0    32.2% ± 0.0    40.0% ± 0.0
                    linear classifier   72.5% ± 10.5   37.2% ± 11.0   39.1% ± 6.8    49.6% ± 8.0

Table 3: Reward function accuracy by step (Jaccard overlap on test sets)."}, {"section_index": "21", "section_name": "A.4 PI2 REINFORCEMENT LEARNING ALGORITHM", "section_text": "We chose the PI2 reinforcement learning algorithm (Theodorou et al., 2010) for our experiments, with the particular implementation of the method based on a recently proposed deep reinforcement learning variant (Chebotar et al., 2016). Since our aim is mainly to validate that our learned reward functions capture the goals of the task well enough for learning, we employ a relatively simple linear-Gaussian parameterization of the policy, which corresponds to a sequence of open-loop torque commands with fixed linear feedback to correct for perturbations, as in the work of Chebotar et al. (2016). This policy has the form $\pi(u_t \mid x_t) = \mathcal{N}(K x_t + k_t, \Sigma_t)$, where $K$ is a fixed stabilizing feedback matrix and $k_t$ is a learned control. In this case, the state $x_t$ corresponds to the joint angles and angular velocities of the robot, and $u_t$ corresponds to the joint torques. Since the reward function is evaluated from camera images, we assume that the image is a (potentially stochastic) consequence of the robot's state,
so that we can evaluate the state reward $r(x_t)$ by taking the image $I_t$ observed at time $t$ and computing the corresponding activations $a_t$. Overloading the notation, we can write $a_t = f(I_t(x_t))$, where $f$ is the network we use for visual features. Then, we have $r(x_t) = R(f(I_t(x_t)))$.

The PI2 algorithm is an episodic policy improvement algorithm that uses the reward $r(x_t)$ to iteratively improve the policy. The trust-region variant of PI2 that we use (Chebotar et al., 2016), which is also similar to the REPS algorithm (Peters et al., 2010), updates the policy at iteration $n$ by sampling trajectories $\tau^{(i)}$ from the current policy and reweighting them according to

$$\pi^{(n+1)}(u_t \mid x_t) \propto \pi^{(n)}(u_t \mid x_t)\, \exp\left( \frac{1}{\eta_t} \sum_{t'=t}^{T} r(x_{t'}) \right)$$

where the temperature $\eta_t$ is chosen to bound the KL-divergence between the new policy $\pi^{(n+1)}(u_t \mid x_t)$ and the previous policy $\pi^{(n)}(u_t \mid x_t)$, such that $D_{KL}\left(\pi^{(n+1)}(u_t \mid x_t) \,\|\, \pi^{(n)}(u_t \mid x_t)\right) \le \epsilon$ for a step size $\epsilon$. Further details and a complete derivation are provided in prior work (Theodorou et al., 2010; Peters et al., 2010; Chebotar et al., 2016).

The PI2 algorithm is a local policy search method that performs best when provided with demonstrations to bootstrap the policy. In our experiments, we use this method together with our learned reward functions to learn a door opening skill with a real physical robot, as discussed in Section 3.2. Demonstrations are provided with kinesthetic teaching, which results in a sequence of reference steps $\hat{x}_t$, and initial controls $\hat{k}_t$ are given by $\hat{k}_t = -K \hat{x}_t$, such that the mean of the initial controller is $K(x_t - \hat{x}_t)$, corresponding to a trajectory-following initialization. This initial controller is rarely successful consistently, but the occasional successes it achieves provide a learning signal to the algorithm. The use of demonstrations enables PI2 to quickly and efficiently learn complex robotic manipulation skills.

Although this particular RL algorithm requires demonstrations to begin learning, it can still provide a useful starting point for real-world learning with a real robotic system. As shown by Chebotar et al. (2016), the initial set of demonstrations can be expanded into a generalizable policy by iteratively "growing" the effective region where the policy succeeds. For example, if the robot is provided with a demonstration of opening a door in one position, additional learning can expand the policy to succeed in nearby positions, and the application of a suitable curriculum can grow the region of door poses in which the policy succeeds progressively. However, as with all RL algorithms, this process requires knowledge of the reward function. Using the method described in this paper, we can learn such a reward function from either the initial demonstrations or even from other demonstration videos provided by a human. Armed with this learned reward function, the robot could continue to improve its policy through real-world experience, iteratively increasing its region of competence through lifelong learning.
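A minimal sketch of this update for the open-loop terms $k_t$, with a fixed temperature standing in for the KL-based choice of $\eta_t$ used by the trust-region variant:

```python
import numpy as np

# PI2-style update: reweight sampled controls by the exponentiated
# reward-to-go. U: [N, T, dim_u] sampled controls and R: [N, T] per-step
# rewards, both from N rollouts of the current policy. eta is fixed here for
# simplicity; the trust-region variant chooses eta_t to bound a KL step.
def pi2_update(U, R, eta=1.0):
    S = np.cumsum(R[:, ::-1], axis=1)[:, ::-1]    # reward-to-go per step
    W = np.exp((S - S.max(axis=0)) / eta)         # stabilized weights [N, T]
    W /= W.sum(axis=0, keepdims=True)
    return np.einsum("nt,ntd->td", W, U)          # new open-loop controls k_t
```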
Figure 9: Qualitative examples of unsupervised discovery of steps for door and pouring tasks in training videos. For each video, we show the detected splits when splitting into 2, 3 or 4 segments. Each segment is delimited by a different value on the vertical axis of the curves.

Figure 10: Qualitative examples of reward functions for the door task in testing videos. These plots show the individual sub-goal rewards for either 2 or 3 goal splits. The "open" or "closed" door reward functions fire quite reliably in all plots; the "hand on handle" step, however, can be a weaker and noisier signal, as seen in 10b and 10c, or incorrect, as shown in 10e. 10d demonstrates how a "missed" action is correctly recognized.

Figure 11: Entire testing set of "pouring" reward functions. This testing set is designed to be more challenging than the training set by including ambiguous cases such as pouring into an already full glass (11i and 11j) or pouring with a closed bottle (11g and 11h). Despite the ambiguous inputs, the reward functions do produce reasonably low or high reward based on how full the glass is. 11a, 11b, 11c and 11d are not strictly monotonically increasing but do overall demonstrate a reasonable progression as the pouring is executed, up to a saturated maximum reward when the glass is full. 11e also correctly trends upwards but starts with a high reward with an empty glass. 11f is a failure case where the somewhat transparent liquid is not detected.

Figure 12: Entire training set of human demonstrations"}]
rJY3vK9eg
[{"section_index": "0", "section_name": "NEURAL COMBINATORIAL OPTIMIZATION WITH REINFORCEMENT LEARNING", "section_text": "Irwan Bello*, Hieu Pham*, Quoc V. Le, Mohammad Norouzi, Samy Bengio Google Brain
{ibello,hyhieu,qvl,mnorouzi,bengio}@google.com
This paper presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent neural network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent neural network using a policy gradient method. We compare learning the network parameters on a set of training graphs against learning them on individual test graphs. Without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied to the KnapSack, another NP-hard problem, the same method obtains optimal solutions for instances with up to 200 items. These results, albeit still far from state-of-the-art, give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Combinatorial optimization is a fundamental problem in computer science. A canonical example is the traveling salesman problem (TSP), where given a graph, one needs to search the space of permutations to find an optimal sequence of nodes with minimal total edge weights (tour length). The TSP and its variants have myriad applications in planning, manufacturing, genetics, etc. (see Applegate et al. (2011) for an overview).

Finding the optimal TSP solution is NP-hard, even in the two-dimensional Euclidean case (Papadimitriou 1977), where the nodes are 2D points and edge weights are Euclidean distances between pairs of points. In practice, TSP solvers rely on handcrafted heuristics that guide their search procedures to find competitive (and in many cases optimal) tours efficiently. Even though these heuristics work well on TSP, once the problem statement changes slightly, they need to be revised. In contrast, machine learning methods have the potential to be applicable across many optimization tasks by automatically discovering their own heuristics based on the training data, thus requiring less hand-engineering than solvers that are optimized for one task only.

While most successful machine learning techniques fall into the family of supervised learning, where a mapping from training inputs to outputs is learned, supervised learning is not applicable to most combinatorial optimization problems because one does not have access to optimal labels. However, one can compare the quality of a set of solutions using a verifier, and provide some reward feedback to a learning algorithm. Hence, we follow the reinforcement learning (RL) paradigm to tackle combinatorial optimization. We empirically demonstrate that, even when using optimal solutions as labeled data to optimize a supervised mapping, the generalization is rather poor compared to an RL agent that explores different tours and observes their corresponding rewards.

We propose Neural Combinatorial Optimization, a framework to tackle combinatorial optimization problems using reinforcement learning and neural networks. We consider two approaches based on policy gradients (Williams 1992).
The first approach, called RL pretraining, uses a training set to optimize a recurrent neural network (RNN) that parameterizes a stochastic policy over solutions, using the expected reward as objective. At test time, the policy is fixed, and one performs inference"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Figure 1: Tour length ratios of LK-H (Helsgaun 2000) local search and our best method (RL pretraining-Active Search) against optimality, guaranteed by Concorde (Applegate et al. 2006). Generic local search, obtained via Google's vehicle routing problem solver (Google 2016), applies a set of heuristics starting from the Christofides (1976) solution. Note that our method is five orders of magnitude slower than LK-H and Concorde.

by greedy decoding or sampling. The second approach, called active search, involves no pretraining. It starts from a random policy and iteratively optimizes the RNN parameters on a single test instance, again using the expected reward objective, while keeping track of the best solution sampled during the search. We find that combining RL pretraining and active search works best in practice.

On 2D Euclidean graphs with up to 100 nodes, Neural Combinatorial Optimization significantly outperforms the supervised learning approach to the TSP (Vinyals et al. 2015b) and obtains close to optimal results when allowed more computation time (see Figure 1). We illustrate the flexibility of the method by also applying it to the KnapSack problem, for which we get optimal results for instances with up to 200 items. Our results, while still inferior to the state-of-the-art in many dimensions (such as speed, scale and performance), give insights into how neural networks can be used as a general tool for tackling combinatorial optimization problems, especially those that are difficult to design heuristics for.

The Traveling Salesman Problem is a well studied combinatorial optimization problem and many exact or approximate algorithms have been proposed for both Euclidean and non-Euclidean graphs. Christofides (1976) proposes a heuristic algorithm that involves computing a minimum-spanning tree and a minimum-weight perfect matching. The algorithm has polynomial running time and returns solutions that are guaranteed to be within a factor of 1.5 to optimality in the metric instance of the TSP.

The best known exact dynamic programming algorithm for TSP has a complexity of O(2^n n²), making it infeasible to scale up to large instances, say with 40 points. Nevertheless, state of the art TSP solvers, thanks to carefully handcrafted heuristics that describe how to navigate the space of feasible solutions in an efficient manner, can solve symmetric TSP instances with thousands of nodes. Concorde (Applegate et al. 2006), widely accepted as one of the best exact TSP solvers, makes use of cutting plane algorithms (Dantzig et al. 1954; Padberg & Rinaldi 1990; Applegate et al. 2003), iteratively solving linear programming relaxations of the TSP, in conjunction with a branch-and-bound approach that prunes parts of the search space that provably will not contain an optimal solution.
Similarly, the Lin-Kernighan-Helsgaun heuristic (Helsgaun 2000), inspired from the Lin-Kernighan heuristic (Lin & Kernighan 1973), is a state of the art approximate search heuristic for the symmetric TSP and has been shown to solve instances with hundreds of nodes to optimality.

More generic solvers, such as Google's vehicle routing problem solver (Google 2016) that tackles a superset of the TSP, typically rely on a combination of local search algorithms and metaheuristics. Local search algorithms apply a specified set of local move operators on candidate solutions, based on hand-engineered heuristics such as 2-opt (Johnson 1990), to navigate from solution to solution in the search space. A metaheuristic is then applied to propose uphill moves and escape local optima. A popular choice of metaheuristic for the TSP and its variants is guided local search (Voudouris & Tsang 1999), which moves out of a local minimum by penalizing particular solution features that it considers should not occur in a good solution.

The difficulty in applying existing search heuristics to newly encountered problems, or even new instances of a similar problem, is a well-known challenge that stems from the No Free Lunch theorem (Wolpert & Macready 1997). Because all search algorithms have the same performance when averaged over all problems, one must appropriately rely on a prior over problems when selecting a search algorithm to guarantee performance. This challenge has fostered interest in raising the level of generality at which optimization systems operate (Burke et al. 2003) and is the underlying motivation behind hyper-heuristics, defined as "search method[s] or learning mechanism[s] for selecting or generating heuristics to solve computation search problems". Hyper-heuristics aim to be easier to use than problem specific methods by partially abstracting away the knowledge intensive process of selecting heuristics given a combinatorial problem, and have been shown to successfully combine human-defined heuristics in superior ways across many tasks (see (Burke et al. 2013) for a survey). However, hyper-heuristics operate on the search space of heuristics, rather than the search space of solutions, therefore still initially relying on human created heuristics.

The application of neural networks to combinatorial optimization has a distinguished history, where the majority of research focuses on the Traveling Salesman Problem (Smith 1999). One of the earliest proposals is the use of Hopfield networks (Hopfield & Tank 1985) for the TSP. The authors modify the network's energy function to make it equivalent to the TSP objective and use Lagrange multipliers to penalize the violations of the problem's constraints. A limitation of this approach is that it is sensitive to hyperparameters and parameter initialization, as analyzed by (Wilson & Pawley 1988). Overcoming this limitation is central to the subsequent work in the field, especially by (Aiyer et al. 1990; Gee 1993). Parallel to the development of Hopfield networks is the work on using deformable template models to solve TSP. Perhaps most prominent is the invention of Elastic Nets as a means to solve TSP (Durbin 1987), and the application of Self Organizing Maps to TSP (Fort 1988; Angeniol et al. 1988; Kohonen 1990). Addressing the limitations of deformable template models is central to the following work in this area (Burke 1994; Favata & Walker 1991; Vakhutinsky & Golden 1995).
Even though these neural networks have many appealing properties, they are still limited as research work. When carefully benchmarked, they have not yielded satisfying results compared to algorithmic methods (Sarwar & Bhatti 2012; La Maire & Mladenov 2012). Perhaps due to these negative results, this research direction has been largely overlooked since the turn of the century.

Motivated by the recent advancements in sequence-to-sequence learning (Sutskever et al. 2014), neural networks are again the subject of study for optimization in various domains (Yutian et al. 2016), including discrete ones (Zoph & Le 2016). In particular, the TSP is revisited in the introduction of Pointer Networks (Vinyals et al. 2015b), where a recurrent network with non-parametric softmaxes is trained in a supervised manner to predict the sequence of visited cities. Despite architectural improvements, their models were trained using supervised signals given by an approximate solver.

"}, {"section_index": "3", "section_name": "3 NEURAL NETWORK ARCHITECTURE FOR TSP", "section_text": "We focus on the 2D Euclidean TSP in this paper. Given an input graph, represented as a sequence of n cities in a two dimensional space s = {x_i}_{i=1}^{n} where each x_i ∈ R², we are concerned with finding a permutation of the points π, termed a tour, that visits each city once and has the minimum total length. We define the length of a tour defined by a permutation π as

L(π | s) = ‖x_{π(n)} − x_{π(1)}‖₂ + Σ_{i=1}^{n−1} ‖x_{π(i)} − x_{π(i+1)}‖₂   (1)

where ‖·‖₂ denotes the ℓ₂ norm.

We aim to learn the parameters of a stochastic policy p(π | s) that, given an input set of points s, assigns high probabilities to short tours and low probabilities to long tours. Our neural network

Figure 2: A pointer network architecture introduced by (Vinyals et al. 2015b)

architecture uses the chain rule to factorize the probability of a tour as

p(π | s) = Π_{i=1}^{n} p(π(i) | π(<i), s),   (2)

We are inspired by previous work (Sutskever et al. 2014) that makes use of the same factorization based on the chain rule to address sequence to sequence problems like machine translation. One can use a vanilla sequence to sequence model to address the TSP where the output vocabulary is {1, 2, ..., n}. However, there are two major issues with this approach: (1) networks trained in this fashion cannot generalize to inputs with more than n cities, and (2) one needs to have access to ground-truth output permutations to optimize the parameters with conditional log-likelihood. We address both issues in this paper.

For generalization beyond a pre-specified graph size, we follow the approach of (Vinyals et al. 2015b), which makes use of a set of non-parametric softmax modules, resembling the attention mechanism from (Bahdanau et al. 2015). This approach, named pointer network, allows the model to effectively point to a specific position in the input sequence rather than predicting an index value from a fixed-size vocabulary.
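For concreteness, a small self-contained sketch of the tour length L(π | s) and of sampling a tour one city at a time follows. The uniform choice over unvisited cities is a stand-in assumption for the learned conditionals p(π(i) | π(<i), s).

```python
import numpy as np

def tour_length(points, perm):
    """L(pi|s): closing edge plus the sum of consecutive edge lengths (l2 norm)."""
    p = points[perm]
    return np.linalg.norm(p[-1] - p[0]) + np.linalg.norm(p[1:] - p[:-1], axis=1).sum()

def sample_tour(n, rng):
    """Chain-rule factorized sampling: pick one unvisited city at a time
    (uniform stand-in for the learned pointing distribution)."""
    remaining = list(range(n))
    tour = []
    while remaining:
        tour.append(remaining.pop(rng.integers(len(remaining))))
    return np.array(tour)

rng = np.random.default_rng(0)
pts = rng.random((10, 2))                 # ten cities in the unit square
print(tour_length(pts, sample_tour(10, rng)))
```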
We employ the pointer network architecture, depicted in Figure 2, as our policy model to parameterize p(π | s)."}, {"section_index": "4", "section_name": "3.1 ARCHITECTURE DETAILS", "section_text": "Our pointer network comprises two recurrent neural network (RNN) modules, encoder and decoder, both of which consist of Long Short-Term Memory (LSTM) cells (Hochreiter & Schmidhuber 1997). The encoder network reads the input sequence s, one city at a time, and transforms it into a sequence of latent memory states {enc_i}_{i=1}^{n} where enc_i ∈ R^d. The input to the encoder network at time step i is a d-dimensional embedding of a 2D point x_i, which is obtained via a linear transformation of x_i shared across all input steps. The decoder network also maintains its latent memory states {dec_i}_{i=1}^{n} where dec_i ∈ R^d and, at each step i, uses a pointing mechanism to produce a distribution over the next city to visit in the tour. Once the next city is selected, it is passed as the input to the next decoder step. The input of the first decoder step (denoted by ⟨g⟩ in Figure 2) is a d-dimensional vector treated as a trainable parameter of our neural network.

Vinyals et al. (2015a) also suggest including some additional computation steps, named glimpses, to aggregate the contributions of different parts of the input sequence, very much like (Bahdanau et al. 2015). We discuss this approach in detail in Appendix A.1. In our experiments, we find that utilizing one glimpse in the pointing mechanism yields performance gains at an insignificant cost in latency.

Our attention function, formally defined in Appendix A.1, takes as input a query vector q = dec_i ∈ R^d and a set of reference vectors ref = {enc_1, ..., enc_k} where enc_i ∈ R^d, and predicts a distribution A(ref, q) over the set of k references. This probability distribution represents the degree to which the model is pointing to reference r_i upon seeing query q.

Algorithm 1 Actor-critic training
1: procedure TRAIN(training set S, number of training steps T, batch size B)
2:   Initialize pointer network params θ
3:   Initialize critic network params θ_v
4:   for t = 1 to T do
5:     s_i ∼ SAMPLEINPUT(S) for i ∈ {1, ..., B}
6:     π_i ∼ SAMPLESOLUTION(p_θ(·|s_i)) for i ∈ {1, ..., B}
7:     b_i ← b_{θ_v}(s_i) for i ∈ {1, ..., B}
8:     g_θ ← (1/B) Σ_{i=1}^{B} (L(π_i|s_i) − b_i) ∇_θ log p_θ(π_i|s_i)
9:     L_v ← (1/B) Σ_{i=1}^{B} ‖b_i − L(π_i|s_i)‖₂²
10:    θ ← ADAM(θ, g_θ)
11:    θ_v ← ADAM(θ_v, ∇_{θ_v} L_v)
12:  end for
13:  return θ
14: end procedure

"}, {"section_index": "5", "section_name": "OPTIMIZATION WITH POLICY GRADIENTS", "section_text": "Vinyals et al. (2015b) propose training a pointer network using a supervised loss function comprising conditional log-likelihood, which factors into a cross-entropy objective between the network's output probabilities and the targets provided by a TSP solver. Learning from examples in such a way is undesirable for NP-hard problems because (1) the performance of the model is tied to the quality of the supervised labels, (2) getting high-quality labeled data is expensive and may be infeasible for new problem statements, and (3) one cares about finding a competitive solution more than replicating the results of another algorithm.
By contrast, we believe Reinforcement Learning (RL) provides an appropriate paradigm for training neural networks for combinatorial optimization, especially because these problems have relatively simple reward mechanisms that could even be used at test time. We hence propose to use model-free policy-based Reinforcement Learning to optimize the parameters of a pointer network, denoted θ. Our training objective is the expected tour length which, given an input graph s, is defined as

J(θ | s) = E_{π∼p_θ(·|s)}[ L(π | s) ]   (3)

During training, our graphs are drawn from a distribution S, and the total training objective involves sampling from the distribution of graphs, i.e. J(θ) = E_{s∼S}[ J(θ | s) ].

We resort to policy gradient methods and stochastic gradient descent to optimize the parameters. The gradient of (3) is formulated using the well-known REINFORCE algorithm (Williams 1992):

∇_θ J(θ | s) = E_{π∼p_θ(·|s)}[ (L(π | s) − b(s)) ∇_θ log p_θ(π | s) ]   (4)

where b(s) denotes a baseline function that does not depend on π and estimates the expected tour length to reduce the variance of the gradients.

By drawing B i.i.d. sample graphs s_1, s_2, ..., s_B ∼ S and sampling a single tour per graph, i.e. π_i ∼ p_θ(· | s_i), the gradient in (4) is approximated with Monte Carlo sampling as follows (a short sketch of this estimator in code is given after Algorithm 2):

∇_θ J(θ) ≈ (1/B) Σ_{i=1}^{B} (L(π_i | s_i) − b(s_i)) ∇_θ log p_θ(π_i | s_i)   (5)

A simple and popular choice of the baseline b(s) is an exponential moving average of the rewards obtained by the network over time to account for the fact that the policy improves with training. While this choice of baseline proved sufficient to improve upon the Christofides algorithm, it suffers from not being able to differentiate between different input graphs. In particular, the optimal tour π* for a difficult graph s may still be discouraged if L(π*|s) > b, because b is shared across all instances in the batch.

Using a parametric baseline to estimate the expected tour length E_{π∼p_θ(·|s)}[L(π | s)] typically improves learning. Therefore, we introduce an auxiliary network, called a critic and parameterized by θ_v, to learn the expected tour length found by our current policy p_θ given an input sequence s. The critic is trained with stochastic gradient descent on a mean squared error objective between its predictions b_{θ_v}(s) and the actual tour lengths sampled by the most recent policy. The additional objective is formulated as

L(θ_v) = (1/B) Σ_{i=1}^{B} ‖b_{θ_v}(s_i) − L(π_i | s_i)‖₂²   (6)

Algorithm 2 Active Search
1: procedure ACTIVESEARCH(input s, θ, number of candidates K, B, α)
2:   π ← RANDOMSOLUTION()
3:   L_π ← L(π | s)
4:   n ← ⌈K/B⌉
5:   for t = 1 ... n do
6:     π_i ∼ SAMPLESOLUTION(p_θ(·|s)) for i ∈ {1, ..., B}
7:     j ← ARGMIN(L(π_1 | s), ..., L(π_B | s))
8:     L_j ← L(π_j | s)
9:     if L_j < L_π then
10:      π ← π_j
11:      L_π ← L_j
12:    end if
13:    g_θ ← (1/B) Σ_{i=1}^{B} (L(π_i | s) − b) ∇_θ log p_θ(π_i | s)
14:    θ ← ADAM(θ, g_θ)
15:    b ← α · b + (1 − α) · ((1/B) Σ_{i=1}^{B} L(π_i | s))
16:  end for
17:  return π
18: end procedure
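The estimator in (5) translates almost directly into code. The sketch below assumes the per-sample score vectors ∇_θ log p_θ(π_i|s_i) are given; in a real autodiff implementation the advantage term would be detached so that no gradient flows through it.

```python
import numpy as np

def reinforce_gradient(tour_lengths, baselines, grad_log_probs):
    """Monte Carlo estimate of (5): (1/B) * sum_i (L_i - b_i) * grad log p(pi_i|s_i).
    grad_log_probs: array of shape (B, num_params), one score vector per sample."""
    advantage = np.asarray(tour_lengths) - np.asarray(baselines)   # (B,)
    return (advantage[:, None] * grad_log_probs).mean(axis=0)      # (num_params,)
```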
Critic's architecture for TSP. We now explain how our critic maps an input sequence s into a baseline prediction b_{θ_v}(s). Our critic comprises three neural network modules: 1) an LSTM encoder, 2) an LSTM process block and 3) a 2-layer ReLU neural network decoder. Its encoder has the same architecture as that of our pointer network's encoder and encodes an input sequence s into a sequence of latent memory states and a hidden state h. The process block, similarly to (Vinyals et al. 2015a), then performs P steps of computation over the hidden state h. Each processing step updates this hidden state by glimpsing at the memory states as described in Appendix A.1 and feeds the output of the glimpse function as input to the next processing step. At the end of the process block, the obtained hidden state is then decoded into a baseline prediction (i.e. a single scalar) by two fully connected layers with respectively d and 1 unit(s).

Our training algorithm, described in Algorithm 1, is closely related to the asynchronous advantage actor-critic (A3C) proposed in (Mnih et al. 2016), as the difference between the sampled tour lengths and the critic's predictions is an unbiased estimate of the advantage function. We perform our updates asynchronously across multiple workers, but each worker also handles a mini-batch of graphs for better gradient estimates."}, {"section_index": "6", "section_name": "4.1 SEARCH STRATEGIES", "section_text": "As evaluating a tour length is inexpensive, our TSP agent can easily simulate a search procedure at inference time by considering multiple candidate solutions per graph and selecting the best. This inference process resembles how solvers search over a large set of feasible solutions. In this paper, we consider two search strategies detailed below, which we refer to as sampling and active search.

Sampling. Our first approach is simply to sample multiple candidate tours from our stochastic policy p_θ(·|s) and select the shortest one. In contrast to heuristic solvers, we do not enforce our model to sample different tours during the process. However, we can control the diversity of the sampled tours with a temperature hyperparameter when sampling from our non-parametric softmax (see Appendix A.2; a short sketch follows at the end of this section). This sampling process yields significant improvements over greedy decoding, which always selects the index with the largest probability. We also considered perturbing the pointing mechanism with random noise and greedily decoding from the obtained modified policy, similarly to (Cho 2016), but this proves less effective than sampling in our experiments.

Table 1: Different learning configurations

Configuration               | Learn on training data | Sampling on test set | Refining on test set
RL pretraining-Greedy       | Yes                    | No                   | No
Active Search (AS)          | No                     | Yes                  | Yes
RL pretraining-Sampling     | Yes                    | Yes                  | No
RL pretraining-Active Search| Yes                    | Yes                  | Yes

Active Search. Rather than sampling with a fixed model and ignoring the reward information obtained from the sampled solutions, one can refine the parameters of the stochastic policy p_θ during inference to minimize E_{π∼p_θ(·|s)}[L(π | s)] on a single test input s. This approach proves especially competitive when starting from a trained model. Remarkably, it also produces satisfying solutions when starting from an untrained model. We refer to these two approaches as RL pretraining-Active Search and Active Search, because the model actively updates its parameters while searching for candidate solutions on a single test instance.

Active Search applies policy gradients similarly to Algorithm 1, but draws Monte Carlo samples over candidate solutions π_1 ... π_B ∼ p_θ(·|s) for a single test input. It resorts to an exponential moving average baseline, rather than a critic, as there is no need to differentiate between inputs. Our Active Search training algorithm is presented in Algorithm 2. We note that while RL training does not require supervision, it still requires training data and hence generalization depends on the training data distribution. In contrast, Active Search is distribution independent.
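The temperature-controlled sampling mentioned above can be sketched in a few lines. The masking of already-visited cities and the numerical stabilization are assumed implementation details.

```python
import numpy as np

def sample_city(logits, visited, T=2.0, rng=np.random.default_rng()):
    """Softmax with temperature over unvisited cities; higher T gives more
    diverse tours, T = 1 recovers the training-time distribution."""
    masked = np.where(visited, -np.inf, logits / T)
    p = np.exp(masked - masked[~visited].max())   # stabilized softmax
    p[visited] = 0.0
    p /= p.sum()
    return rng.choice(len(logits), p=p)
```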
Finally, since we encode a set of cities as a sequence, we randomly shuffle the input sequence before feeding it to our pointer network. This increases the stochasticity of the sampling procedure and leads to large improvements in Active Search."}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "We conduct experiments to investigate the behavior of the proposed Neural Combinatorial Optimization methods. We consider three benchmark tasks, Euclidean TSP20, 50 and 100, for which we generate a test set of 1,000 graphs. Points are drawn uniformly at random in the unit square [0, 1]²."}, {"section_index": "8", "section_name": "5.1 EXPERIMENTAL DETAILS", "section_text": "Across all experiments, we use mini-batches of 128 sequences, LSTM cells with 128 hidden units, and embed the two coordinates of each point in a 128-dimensional space. We train our models with the Adam optimizer (Kingma & Ba 2014) and use an initial learning rate of 10⁻³ for TSP20 and TSP50 and 10⁻⁴ for TSP100 that we decay every 5000 steps by a factor of 0.96. We initialize our parameters uniformly at random within [−0.08, 0.08] and clip the L2 norm of our gradients to 1.0. We use up to one attention glimpse. When searching, the mini-batches either consist of replications of the test sequence or its permutations. The baseline decay is set to α = 0.99 in Active Search. Our model and training code in TensorFlow (Abadi et al. 2016) will be made available soon. Table 1 summarizes the configurations and different search strategies used in the experiments. The variations of our method, experimental procedure and results are as follows.

Supervised Learning. In addition to the described baselines, we implement and train a pointer network with supervised learning, similarly to (Vinyals et al. 2015b). While our supervised data consists of one million optimal tours, we find that our supervised learning results are not as good as those reported by (Vinyals et al. 2015b). We suspect that learning from optimal tours is harder for supervised pointer networks due to subtle features that the model cannot figure out only by looking at given supervised targets. We thus refer to the results in (Vinyals et al. 2015b) for TSP20 and TSP50 and report our results on TSP100, all of which are suboptimal compared to other approaches.

Table 2: Average tour lengths (lower is better). Results marked (†) are from (Vinyals et al. 2015b).

Task   | Supervised Learning | RL greedy | RL greedy@16 | RL sampling | RL AS | AS   | Christofides | OR-Tools' local search | Optimal
TSP20  | 3.88 (†)            | 3.89      | -            | 3.82        | 3.82  | 3.96 | 4.30         | 3.85                   | 3.82
TSP50  | 6.09 (†)            | 5.95      | 5.80         | 5.70        | 5.70  | 5.87 | 6.62         | 5.80                   | 5.68
TSP100 | 10.81               | 8.30      | 7.97         | 7.88        | 7.83  | 8.19 | 9.18         | 7.99                   | 7.77

RL pretraining. For the RL experiments, we generate training mini-batches of inputs on the fly and update the model parameters with the Actor-Critic Algorithm 1. We use a validation set of 10,000 randomly generated instances for hyper-parameter tuning. Our critic consists of an encoder network which has the same architecture as that of the policy network, but followed by 3 processing steps and 2 fully connected layers. We find that clipping the logits to [−10, 10] with a tanh(·) activation function, as described in Appendix A.2, helps with exploration and yields marginal performance gains. The simplest search strategy using an RL pretrained model is greedy decoding, i.e. selecting the city with the largest probability at each decoding step. We also experiment with decoding greedily from a set of 16 pretrained models at inference time. For each graph, the tour found by each individual model is collected and the shortest tour is chosen. We refer to those approaches as RL pretraining-greedy and RL pretraining-greedy@16.

RL pretraining-Sampling. For each test instance, we sample 1,280,000 candidate solutions from a pretrained model and keep track of the shortest tour. A grid search over the temperature hyperparameter found respective temperatures of 2.0, 2.2 and 1.5 to yield the best results for TSP20, TSP50 and TSP100. We refer to the tuned temperature hyperparameter as T*. Since sampling does not require parameter updates and is entirely parallelizable, we use a larger batch size for speed purposes.

RL pretraining-Active Search. For each test instance, we initialize the model parameters from a pretrained RL model and run Active Search for up to 10,000 training steps with a batch size of 128, sampling a total of 1,280,000 candidate solutions. We set the learning rate to a hundredth of the initial learning rate the TSP agent was trained on (i.e. 10⁻⁵ for TSP20/TSP50 and 10⁻⁶ for TSP100).

Active Search. We allow the model to train much longer to account for the fact that it starts from scratch. For each test graph, we run Active Search for 100,000 training steps on TSP20/TSP50 and 200,000 training steps on TSP100.

We report the average tour lengths of our approaches on TSP20, TSP50, and TSP100 in Table 2. Notably, results demonstrate that training with RL significantly improves over supervised learning (Vinyals et al. 2015b). All our methods comfortably surpass Christofides' heuristic, including RL pretraining-Greedy, which also does not rely on search. Table 3 compares the running times of our greedy methods to the aforementioned baselines, with our methods running on a single Nvidia Tesla K80 GPU, Concorde and LK-H running on an Intel Xeon CPU E5-1650 v3 3.50GHz CPU, and OR-Tools on an Intel Haswell CPU. We find that both greedy approaches are time-efficient but still quite far from optimality.

Table 3: Running times in seconds (s) of greedy methods compared to OR-Tools' local search and solvers that find the optimal solutions. Time is measured over the entire test set and averaged. LK-H was run for 50 trials per graph (the default parameter setting). It is likely that optimal solutions were found in fewer trials, resulting in shorter running times.

Task   | RL pretraining greedy | greedy@16 | OR-Tools' local search | Concorde | LK-H
TSP50  | 0.003s                | 0.04s     | 0.02s                  | 0.05s    | 0.14s
TSP100 | 0.01s                 | 0.15s     | 0.10s                  | 0.22s    | 0.88s

Searching at inference time proves crucial to get closer to optimality but comes at the expense of longer running times. Fortunately, the search from RL pretraining-Sampling and RL pretraining-Active Search can be stopped early with a small performance tradeoff in terms of the final objective. This can be seen in Table 4, where we show their performances and corresponding running times as a function of how many solutions they consider.

Table 4: Average tour lengths of RL pretraining-Sampling and RL pretraining-Active Search as they sample more solutions. Corresponding running times on a single Tesla K80 GPU are in parentheses.

Task   | # Solutions | Sampling T = 1 | Sampling T = T* | Active Search
TSP50  | 128         | 5.80 (3.4s)    | 5.80 (3.4s)     | 5.80 (0.5s)
TSP50  | 1,280       | 5.77 (3.4s)    | 5.75 (3.4s)     | 5.76 (5s)
TSP50  | 12,800      | 5.75 (13.8s)   | 5.73 (13.8s)    | 5.74 (50s)
TSP50  | 128,000     | 5.73 (110s)    | 5.71 (110s)     | 5.72 (500s)
TSP50  | 1,280,000   | 5.72 (1080s)   | 5.70 (1080s)    | 5.70 (5000s)
TSP100 | 128         | 8.05 (10.3s)   | 8.09 (10.3s)    | 8.04 (1.2s)
TSP100 | 1,280       | 8.00 (10.3s)   | 8.00 (10.3s)    | 7.98 (12s)
TSP100 | 12,800      | 7.95 (31s)     | 7.95 (31s)      | 7.92 (120s)
TSP100 | 128,000     | 7.92 (265s)    | 7.91 (265s)     | 7.87 (1200s)
TSP100 | 1,280,000   | 7.89 (2640s)   | 7.88 (2640s)    | 7.83 (12000s)

We present a more detailed comparison of our methods in Figure 3, where we sort the ratios to optimality of our different learning configurations. RL pretraining-Sampling and RL pretraining-Active Search are the most competitive Neural Combinatorial Optimization methods and recover the optimal solution in a significant number of our test cases. We find that for small solution spaces, RL pretraining-Sampling, with a finetuned softmax temperature, outperforms RL pretraining-Active Search, with the latter sometimes orienting the search towards suboptimal regions of the solution space (see TSP50 results in Table 4 and Figure 3). Furthermore, RL pretraining-Sampling benefits from being fully parallelizable and runs faster than RL pretraining-Active Search. However, for larger solution spaces, RL pretraining-Active Search proves superior both when controlling for the number of sampled solutions and when controlling for the running time. Interestingly, Active Search, which starts from an untrained model, also produces competitive tours but requires a considerable amount of time (respectively 7 and 25 hours per instance of TSP50/TSP100). Finally, we show randomly picked example tours found by our methods in Figure 4 in Appendix A.4.

We also find that many of our RL pretraining methods outperform OR-Tools' local search, including RL pretraining-Greedy@16, which runs similarly fast.
Table 6 in Appendix A.3 presents the performance of the metaheuristics as they consider more solutions and the corresponding running times. In our experiments, Neural Combinatorial proves superior to Simulated Annealing but is slightly less competitive than Tabu Search and much less so than Guided Local Search.

Figure 3: Sorted tour length ratios to optimality

In this section, we discuss how to apply Neural Combinatorial Optimization to other problems than the TSP. In Neural Combinatorial Optimization, the model architecture is tied to the given combinatorial optimization problem. Examples of useful networks include the pointer network, when the output is a permutation or a truncated permutation or a subset of the input, and the classical seq2seq model for other kinds of structured outputs. For combinatorial problems that require assigning labels to elements of the input, such as graph coloring, it is also possible to combine a pointer module and a softmax module to simultaneously point and assign at decoding time. Given a model that encodes an instance of a given combinatorial optimization task and repeatedly branches into subtrees to construct a solution, the training procedures described in Section 4 can then be applied by adapting the reward function depending on the optimization problem being considered.

Additionally, one also needs to ensure the feasibility of the obtained solutions. For certain combinatorial problems, it is straightforward to know exactly which branches do not lead to any feasible solutions at decoding time. We can then simply manually assign them a zero probability when decoding, similarly to how we enforce our model to not point at the same city twice in our pointing mechanism (see Appendix A.1 and the sketch below). However, for many combinatorial problems, coming up with a feasible solution can be a challenge in itself. Consider, for example, the Travelling Salesman Problem with Time Windows, where the travelling salesman has the additional constraint of visiting each city during a specific time window. It might be that most branches being considered early in the tour do not lead to any solution that respects all time windows. In such cases, knowing exactly which branches are feasible requires searching their subtrees, a time-consuming process that is not much easier than directly searching for the optimal solution unless using problem-specific heuristics.

Rather than explicitly constraining the model to only sample feasible solutions, one can also let the model learn to respect the problem's constraints. A simple approach, to be verified experimentally in future work, consists in augmenting the objective function with a term that penalizes solutions for violating the problem's constraints, similarly to penalty methods in constrained optimization. While this does not guarantee that the model consistently samples feasible solutions at inference time, this is not necessarily problematic as we can simply ignore infeasible solutions and resample from the model (for RL pretraining-Sampling and RL pretraining-Active Search).
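A minimal sketch of the hard masking referred to above follows. The boolean feasibility test itself is problem-specific and is assumed to be given.

```python
import numpy as np

def masked_distribution(logits, feasible):
    """Set infeasible logits to -inf before the softmax, so only feasible
    branches can be sampled; `feasible` is a boolean array over branches."""
    z = np.where(feasible, logits, -np.inf)
    z = z - z[feasible].max()          # numerical stabilization
    p = np.exp(z)
    p[~feasible] = 0.0
    return p / p.sum()
```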
It is also conceivable to combine both approaches by assigning zero probabilities to branches that are easily identifiable as infeasible while still penalizing infeasible solutions once they are entirely constructed.

As an example of the flexibility of Neural Combinatorial Optimization, we consider the KnapSack problem, another intensively studied problem in computer science. Given a set of n items i = 1 ... n, each with weight w_i and value v_i, and a maximum weight capacity of W, the 0-1 KnapSack problem consists in maximizing the sum of the values of items present in the knapsack so that the sum of the weights is less than or equal to the knapsack capacity:

max_{S ⊆ {1,2,...,n}} Σ_{i∈S} v_i   subject to   Σ_{i∈S} w_i ≤ W

With w_i, v_i and W taking real values, the problem is NP-hard (Kellerer et al. 2004). A naive heuristic is to take the items ordered by their weight-to-value ratios until they fill up the weight capacity. Two simple heuristics are ExpKnap, which employs branch-and-bound with Linear Programming bounds (Pisinger 1995), and MinKnap, which uses dynamic programming with enumerative bounds (Pisinger 1997). Exact solutions can also be obtained by quantizing the weights to high precisions and then performing dynamic programming with pseudo-polynomial complexity (Bertsimas & Demir 2002).

We apply the pointer network and encode each KnapSack instance as a sequence of 2D vectors (w_i, v_i). At decoding time, the pointer network points to items to include in the knapsack and stops when the total weight of the items collected so far exceeds the weight capacity. We generate three datasets, KNAP50, KNAP100 and KNAP200, of a thousand instances with items' weights and values drawn uniformly at random in [0, 1]. Without loss of generality (since we can scale the items' weights), we set the capacities to 12.5 for KNAP50 and 25 for KNAP100 and KNAP200. We present the performances of RL pretraining-Greedy and Active Search (which we run for 5,000 training steps) in Table 5 and compare them to the following baselines: 1) random search (which we let sample as many feasible solutions as seen by Active Search), 2) the greedy value-to-weight ratio heuristic, 3) MinKnap, 4) ExpKnap, 5) OR-Tools' KnapSack solver (Google 2016) and 6) optimality (which we obtained by quantizing the weights to high precisions and using dynamic programming).

Table 5: Results of RL pretraining-Greedy and Active Search on KnapSack (higher is better)

Task    | RL pretraining greedy | Active Search | Random Search | Greedy | MinKnap / ExpKnap / OR-Tools | Optimal
KNAP50  | 19.86                 | 20.07         | 17.91         | 19.24  | 20.07                        | 20.07
KNAP100 | 40.27                 | 40.50         | 33.23         | 38.53  | 40.50                        | 40.50
KNAP200 | 57.10                 | 57.45         | 35.95         | 55.42  | 57.45                        | 57.45

This paper presents Neural Combinatorial Optimization, a framework to tackle combinatorial optimization with reinforcement learning and neural networks. We focus on the traveling salesman problem (TSP) and present a set of results for each variation of the framework. Experiments demonstrate that Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. Our results, while still far from the strongest solvers (especially those which are optimized for one problem), provide an interesting research avenue for using neural networks as a general tool for tackling combinatorial optimization problems."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Vincent Furnon, Mustafa Ispir, Lukasz Kaiser, Oriol Vinyals, Barret
Zoph, the Google Brain team and the anonymous ICLR reviewers for insightful comments and discussion."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. arXiv preprint arXiv:1605.08695, 2016.

Sreeram V. B. Aiyer, Mahesan Niranjan, and Frank Fallside. A theoretical investigation into the performance of the Hopfield model. IEEE Transactions on Neural Networks, 1(2):204-215, 1990.

Bernard Angeniol, Gael De La Croix Vaubois, and Jean-Yves Le Texier. Self-organizing feature maps and the Travelling Salesman Problem. Neural Networks, 1(4):289-293, 1988.

David L. Applegate, Robert E. Bixby, Vasek Chvatal, and William J. Cook. The traveling salesman problem: a computational study. Princeton University Press, 2011.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.

Dimitris Bertsimas and Ramazan Demir. An approximate dynamic programming approach to multidimensional knapsack problems. Management Science, 48(4):550-565, 2002.

Edmund Burke, Graham Kendall, Jim Newall, Emma Hart, Peter Ross, and Sonia Schulenburg. Hyper-heuristics: An emerging direction in modern search technology. Springer, 2003.

Laura I. Burke. Neural methods for the Traveling Salesman Problem: insights from operations research. Neural Networks, 7(4):681-690, 1994.

George Dantzig, Ray Fulkerson, and Selmer Johnson. Solution of a large-scale traveling-salesman problem. Journal of the Operations Research Society of America, 1954.

Richard Durbin. An analogue approach to the Travelling Salesman. Nature, 326:16, 1987.

Andrew Howard Gee. Problem solving with optimization networks. PhD thesis, Citeseer, 1993.

Fred Glover and Manuel Laguna. Tabu Search. Springer, 2013.

Keld Helsgaun. An effective implementation of the Lin-Kernighan traveling salesman heuristic. European Journal of Operational Research, 126:106-130, 2000.

Keld Helsgaun. LK-H, 2012. URL http://akira.ruc.dk/~keld/research/LKH/

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 1997.

John J. Hopfield and David W. Tank. 'Neural' computation of decisions in optimization problems. Biological Cybernetics, 52(3):141-152, 1985.

Nicos Christofides. Worst-case analysis of a new heuristic for the Travelling Salesman Problem. In Report 388, Graduate School of Industrial Administration, CMU, 1976.

Favio Favata and Richard Walker. A study of the application of Kohonen-type neural networks to the travelling salesman problem. Biological Cybernetics, 64(6):463-468, 1991.

D. S. Johnson. Local search and the traveling salesman problem. In Proceedings of the 17th International Colloquium on Automata, Languages and Programming, Lecture Notes in Computer Science (Springer-Verlag, Berlin, 1990), pp. 443-460, 1990.

Hans Kellerer, Ulrich Pferschy, and David Pisinger. Knapsack Problems. Springer-Verlag Berlin Heidelberg, 2004.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.

Teuvo Kohonen. The self-organizing map. Proceedings of the IEEE, 78(9):1464-1480, 1990.
Bert F. J. La Maire and Valeri M. Mladenov. Comparison of neural networks for solving the Travelling Salesman Problem. In NEUREL, pp. 21-24. IEEE, 2012.

Manfred Padberg and Giovanni Rinaldi. A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. Society for Industrial and Applied Mathematics, 33:60-100, 1990.

Christos H. Papadimitriou. The Euclidean Travelling Salesman Problem is NP-complete. Theoretical Computer Science, 4(3):237-244, 1977.

David Pisinger. An expanding-core algorithm for the exact 0-1 knapsack problem. European Journal of Operational Research, pp. 175-187, 1995.

David Pisinger. A minimal algorithm for the 0-1 knapsack problem. Operations Research, pp. 758-767, 1997.

Farah Sarwar and Abdul Aziz Bhatti. Critical analysis of Hopfield's neural network model for TSP and its comparison with heuristic algorithm for shortest path computation. In IBCAST, 2012.

Kate A. Smith. Neural networks for combinatorial optimization: a review of more than a decade of research. INFORMS Journal on Computing, 1999.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Andrew I. Vakhutinsky and Bruce L. Golden. A hierarchical strategy for solving traveling salesman problems using elastic nets. Journal of Heuristics, 1(1):67-76, 1995.

Barret Zoph and Quoc Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016."}, {"section_index": "11", "section_name": "A.1 POINTING AND ATTENDING", "section_text": "Pointing mechanism: Its computations are parameterized by two attention matrices W_ref, W_q ∈ R^{d×d} and an attention vector v ∈ R^d as follows:

u_i = { vᵀ · tanh(W_ref · r_i + W_q · q)   if i ≠ π(j) for all j < i
      { −∞                                 otherwise,   for i = 1, ..., k   (8)

A(ref, q; W_ref, W_q, v) := softmax(u)

p(π(j) | π(<j), s) := A(enc_{1:n}, dec_j)

Setting the logits of cities that already appeared in the tour to −∞, as shown in Equation 8, ensures that our model only points at cities that have yet to be visited and hence outputs valid TSP tours.

Attending mechanism: Specifically, our glimpse function G(ref, q) takes the same inputs as the attention function A and performs the following computations:

p = A(ref, q; W^g_ref, W^g_q, v^g)

G(ref, q; W^g_ref, W^g_q, v^g) := Σ_{i=1}^{k} r_i p_i

The glimpse function G essentially computes a linear combination of the reference vectors weighted by the attention probabilities. It can also be applied multiple times on the same reference set ref:

g_0 := q
g_l := G(ref, g_{l−1}; W^g_ref, W^g_q, v^g)

Finally, the ultimate g_l vector is passed to the attention function A(ref, g_l; W_ref, W_q, v) to produce the probabilities of the pointing mechanism. We observed empirically that glimpsing more than once with the same parameters made the model less likely to learn and barely improved the results.

A(ref, q, T; W_ref, W_q, v) := softmax(u/T)

where T is a temperature hyperparameter set to T = 1 during training. When T > 1, the distribution represented by A(ref, q) becomes less steep, hence preventing the model from being overconfident.

A(ref, q; W_ref, W_q, v) := softmax(C tanh(u))

where C is a hyperparameter that controls the range of the logits and hence the entropy of A(ref, q).
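The pointing and glimpse definitions above translate directly into numpy. Parameter shapes follow the text (W_ref, W_q ∈ R^{d×d}, v ∈ R^d); the random initialization is illustrative and −1e9 stands in for −∞.

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

def attention(ref, q, W_ref, W_q, v, visited=None, T=1.0, C=None):
    """A(ref, q): softmax over u_i = v^T tanh(W_ref r_i + W_q q), with visited
    positions masked out; optional temperature T and logit clipping C tanh(u)."""
    u = np.tanh(ref @ W_ref.T + q @ W_q.T) @ v        # (k,) logits
    if C is not None:
        u = C * np.tanh(u)
    if visited is not None:
        u = np.where(visited, -1e9, u)
    return softmax(u / T)

def glimpse(ref, q, W_ref, W_q, v):
    """G(ref, q): attention-weighted linear combination of the reference vectors."""
    return attention(ref, q, W_ref, W_q, v) @ ref

d, k = 4, 6
rng = np.random.default_rng(0)
W_ref, W_q = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v, q, ref = rng.normal(size=d), rng.normal(size=d), rng.normal(size=(k, d))
print(attention(ref, q, W_ref, W_q, v).round(3))
```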
"}, {"section_index": "12", "section_name": "A.3 OR TOOL'S METAHEURISTICS BASELINES FOR TSP", "section_text": "Table 6: Performance of OR-Tools' metaheuristics as they consider more solutions. Corresponding running times in seconds (s) on a single Intel Haswell CPU are in parentheses.

Task   | # Solutions | Simulated Annealing | Tabu Search     | Guided Local Search
TSP50  | 1           | 6.62 (0.03s)        | 6.62 (0.03s)    | 6.62 (0.03s)
TSP50  | 128         | 5.81 (0.24s)        | 5.79 (3.4s)     | 5.76 (0.5s)
TSP50  | 1,280       | 5.81 (4.2s)         | 5.73 (36s)      | 5.69 (5s)
TSP50  | 12,800      | 5.81 (44s)          | 5.69 (330s)     | 5.68 (48s)
TSP50  | 128,000     | 5.81 (460s)         | 5.68 (3200s)    | 5.68 (450s)
TSP50  | 1,280,000   | 5.81 (3960s)        | 5.68 (29650s)   | 5.68 (4530s)
TSP100 | 1           | 9.18 (0.07s)        | 9.18 (0.07s)    | 9.18 (0.07s)
TSP100 | 128         | 8.00 (0.67s)        | 7.99 (15.3s)    | 7.94 (1.44s)
TSP100 | 1,280       | 7.99 (15.7s)        | 7.93 (255s)     | 7.84 (18.4s)
TSP100 | 12,800      | 7.99 (166s)         | 7.84 (2460s)    | 7.77 (182s)
TSP100 | 128,000     | 7.99 (1650s)        | 7.79 (22740s)   | 7.77 (1740s)
TSP100 | 1,280,000   | 7.99 (15810s)       | 7.78 (208230s)  | 7.77 (16150s)

A.4 SAMPLE TOURS

[Figure 4 panels with tour lengths, TSP50 (top) / TSP100 (bottom): RL pretraining-Greedy (5.934 / 7.558), RL pretraining-Sampling (5.734 / 7.467), RL pretraining-Active Search (5.688 / 7.384), Active Search (5.827 / 7.507), Optimal (5.688 / 7.260).]

Figure 4: Sample tours. Top: TSP50; Bottom: TSP100"}]
SJzCSf9xg
[{"section_index": "0", "section_name": "ON DETECTING ADVERSARIAL PERTURBATIONS", "section_text": "Jan Hendrik Metzen & Tim Genewein & Volker Fischer & Bastian Bischoff"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Machine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small "detector" subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector, and a novel training procedure for the detector that counteracts this attack."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "In the last years, machine learning and in particular deep learning methods have led to impressive performance on various challenging perceptual tasks, such as image classification (Russakovsky et al. 2015; He et al. 2016) and speech recognition (Amodei et al. 2016). Despite these advances, perceptual systems of humans and machines still differ significantly. As Szegedy et al. (2014) have shown, small but carefully directed perturbations of images can lead to incorrect classification with high confidence on artificial systems. Yet, for humans these perturbations are often visually imperceptible and do not stir any doubt about the correct classification. In fact, so-called adversarial examples are crucially characterized by requiring minimal perturbations that are quasi-imperceptible to a human observer. For computer vision tasks, multiple techniques to create such adversarial examples have been developed recently. Perhaps most strikingly, adversarial examples have been shown to transfer between different network architectures, and networks trained on disjoint subsets of data (Szegedy et al. 2014). Adversarial examples have also been shown to translate to the real world (Kurakin et al. 2016), e.g., adversarial images can remain adversarial even after being printed and recaptured with a cell phone camera. Moreover, Papernot et al. (2016a) have shown that a potential attacker can construct adversarial examples for a network of unknown architecture by training an auxiliary network on similar data and exploiting the transferability of adversarial inputs.

The vulnerability to adversarial inputs can be problematic and even prevent the application of deep
learning methods in safety- and security-critical applications. The problem is particularly severe when human safety is involved, for example in the case of perceptual tasks for autonomous driving. Methods to increase robustness against adversarial attacks have been proposed and range from augmenting the training data (Goodfellow et al. 2015) over applying JPEG compression to the input (Dziugaite et al. 2016) to distilling a hardened network from the original classifier network (Papernot et al. 2016b). However, for some recently published attacks (Carlini & Wagner 2016), no effective counter-measures are known yet.

In this paper, we propose to train a binary detector network, which obtains inputs from intermediate feature representations of a classifier, to discriminate between samples from the original data set and adversarial examples. Being able to detect adversarial perturbations might help in safety- and security-critical semi-autonomous systems as it would allow disabling autonomous operation and requesting human intervention (along with a warning that someone might be manipulating the system). However, it might intuitively seem very difficult to train such a detector since adversarial inputs are generated by tiny, sometimes visually imperceptible, perturbations of genuine examples. Despite this intuition, our results on CIFAR10 and a 10-class subset of ImageNet show that a detector network that achieves high accuracy in detection of adversarial inputs can be trained successfully. Moreover, while we train a detector network to detect perturbations of a specific adversary, our experiments show that detectors generalize to similar and weaker adversaries. An obvious attack against our approach would be to develop adversaries that take into account both networks, the classification and the adversarial detection network. We present one such adversary and show that we can harden the detector against such an adversary using a novel training procedure.

"}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "Since their discovery by Szegedy et al. (2014), several methods to generate adversarial examples have been proposed. Most of these methods generate adversarial examples by optimizing an image w.r.t. the linearized classification cost function of the classification network, by maximizing the probability for all but the true class or minimizing the probability of the true class (e.g., Goodfellow et al. 2015; Kurakin et al. 2016). The method introduced by Moosavi-Dezfooli et al. (2016b) estimates a linearization of decision boundaries between classes in image space and iteratively shifts an image towards the closest of these linearized boundaries. For more details about these methods, please refer to Section 3.1.

Several approaches exist to increase a model's robustness against adversarial attacks. Goodfellow et al. (2015) propose to augment the training set with adversarial examples. At training time, they minimize the loss for real and adversarial examples, while adversarial examples are chosen to fool the current version of the model. In contrast, Zheng et al. (2016) propose to append a stability term to the objective function, which forces the model to have similar outputs for samples of the training set and their perturbed versions. This differs from data augmentation since it encourages smoothness of the model output between original and distorted samples instead of minimizing the original objective on the adversarial examples directly. Another defense-measure against certain adversarial attack methods is defensive distillation (Papernot et al. 2016b), a special form of network distillation, used to train a network that becomes almost completely resistant against attacks such as the L-BFGS attack (Szegedy et al. 2014) and the fast gradient sign attack (Goodfellow et al. 2015).
However, Carlini & Wagner (2016) recently introduced a novel method for constructing adversarial examples that manages to (very successfully) break many defense methods, including defensive distillation. In fact, the authors find that previous attacks were very fragile and could easily fail to find adversarial examples even when they existed. An experiment on the cross-model adversarial portability (Rozsa et al., 2016) has shown that models with higher accuracies tend to be more robust against adversarial examples, while examples that fool them are more portable to less accurate models.
Even though the existence of adversarial examples has been demonstrated several times on many different classification tasks, the question of why adversarial examples exist in the first place and whether they are sufficiently regular to be detectable, which is studied in this paper, has remained open. Szegedy et al. (2014) speculated that the data-manifold is filled with "pockets" of adversarial inputs that occur with very low probability and thus are almost never observed in the test set. Yet these pockets are dense and so an adversarial example is found virtually near every test case. The authors further speculated that the high non-linearity of deep networks might be the cause for the existence of these low-probability pockets. Later, Goodfellow et al. (2015) introduced the linear explanation: given an input and some adversarial noise $\eta$ (subject to $\|\eta\|_\infty < \varepsilon$), the dot product between a weight vector w and an adversarial input $x^{adv} = x + \eta$ is given by $w^T x^{adv} = w^T x + w^T \eta$. The adversarial noise $\eta$ causes a neuron's activation to grow by $w^T \eta$. The max-norm constraint on $\eta$ does not allow for large values in one dimension, but if x and thus $\eta$ are high-dimensional, many small changes in each dimension of $\eta$ can accumulate to a large change in a neuron's activation. The conclusion was that "linear behavior in high-dimensional spaces is sufficient to cause adversarial examples".
Tanay & Griffin (2016) challenged the linear-explanation hypothesis by constructing classes of images that do not suffer from adversarial examples under a linear classifier. They also point out that if the change in activation $w^T \eta$ grows linearly with the dimensionality of the problem, so does the activation $w^T x$. Instead of the linear explanation, Tanay et al. provide a different explanation for the existence of adversarial examples, including a strict condition for the non-existence of adversarial inputs, a novel measure for the strength of adversarial examples, and a taxonomy of different classes of adversarial inputs. Their main argument is that if a learned class boundary lies close to the data manifold, but the boundary is (slightly) tilted with respect to the manifold (see footnote 1), then adversarial examples can be found by perturbing points from the data manifold towards the classification boundary until the perturbed input crosses the boundary. If the boundary is only slightly tilted, the distance required by the perturbation to cross the decision boundary is very small, leading to strong adversarial examples that are visually almost imperceptibly close to the data.
Tanay et al. further argue that such situations are particularly likely to occur along directions of low variance in the data and thus speculate that adversarial examples can be considered an effect of an over-fitting phenomenon that could be alleviated by proper regularization, though it is completely unclear how to regularize neural networks accordingly.
1 It is easier to imagine a linear decision boundary - for neural networks this argument must be translated into a non-linear equivalent of boundary tilting.
Recently, Moosavi-Dezfooli et al. (2016a) demonstrated that there even exist universal, image-agnostic perturbations which, when added to all data points, fool deep nets on a large fraction of ImageNet validation images. Moreover, they showed that these universal perturbations are to a certain extent also transferable between different network architectures. While this observation raises interesting questions about geometric properties and correlations of different parts of the decision boundary of deep nets, potential regularities in adversarial perturbations may also help detecting them. However, the existence of universal perturbations does not necessarily imply that the adversarial examples generated by data-dependent adversaries will be regular. Actually, Moosavi-Dezfooli et al. (2016a) show that universal perturbations are not unique and that there even exist many different universal perturbations which have little in common. This paper studies if data-dependent adversarial perturbations can nevertheless be detected reliably and answers this question affirmatively."}, {"section_index": "4", "section_name": "3 METHODS", "section_text": "In this section, we introduce the adversarial attacks used in the experiments, propose an approach for detecting adversarial perturbations, introduce a novel adversary that aims at fooling both the classification network and the detector, and propose a training method for the detector that aims at counteracting this novel adversary."}, {"section_index": "5", "section_name": "3.1 GENERATING ADVERSARIAL EXAMPLES", "section_text": "Let x be an input image, ytrue(x) the true class of image x, and Jcls(x, y(x)) be the cost function of the classifier (typically cross-entropy). We briefly introduce different adversarial attacks used in the remainder of the paper.
Fast method: One simple approach to compute adversarial examples was described by Goodfellow et al. (2015). The applied perturbation is the direction in image space which yields the highest increase of the linearized cost function under the l∞-norm. This can be achieved by performing one step in the direction of the gradient's sign with step-width ε:
$x^{adv} = x + \varepsilon \operatorname{sgn}(\nabla_x J_{cls}(x, y_{true}(x)))$
Here, ε is a hyper-parameter governing the distance between adversarial and original image. As suggested in Kurakin et al. (2016), we also refer to this as the fast method due to its non-iterative and hence fast computation.
Basic Iterative method (l∞ and l2): As an extension, Kurakin et al. (2016) introduced an iterative version of the fast method, by applying it several times with a smaller step size α and clipping all pixels after each iteration to ensure results stay in the ε-neighborhood of the original image:
$x^{adv}_0 = x, \qquad x^{adv}_{n+1} = \operatorname{Clip}^{\varepsilon}_{x}\{\, x^{adv}_n + \alpha \operatorname{sgn}(\nabla_x J_{cls}(x^{adv}_n, y_{true}(x))) \,\}$
Following Kurakin et al. (2016), we refer to this method as the basic iterative method and use α = 1, i.e., we change each pixel maximally by 1. The number of iterations is set to 10. In addition to this method, which is based on the l∞-norm, we propose an analogous method based on the l2-norm: in each step this method moves in the direction of the (normalized) gradient and projects the adversarial examples back on the ε-ball around x (points with l2 distance ε to x) if the l2 distance exceeds ε:
$x^{adv}_0 = x, \qquad x^{adv}_{n+1} = \operatorname{Project}^{\varepsilon}_{x}\Big\{\, x^{adv}_n + \alpha \,\frac{\nabla_x J_{cls}(x^{adv}_n, y_{true}(x))}{\|\nabla_x J_{cls}(x^{adv}_n, y_{true}(x))\|_2} \,\Big\}$
DeepFool method: Moosavi-Dezfooli et al. (2016b) introduced the DeepFool adversary, which iteratively perturbs an image xadv. Therefore, in each step the classifier is linearized around xadv and the closest class boundary is determined. The minimal step (according to the lp distance) required to cross this linearized boundary is then applied. Different lp norms can be used within DeepFool, and here we focus on the l2- and l∞-norm. The technical details can be found in (Moosavi-Dezfooli et al., 2016b). We would like to note that we use the variant of DeepFool presented in the first version of the paper (https://arxiv.org/abs/1511.04599v1) since we found it to be more stable compared to the variant reported in the final version."}, {"section_index": "6", "section_name": "3.2 DETECTING ADVERSARIAL EXAMPLES", "section_text": "We augment classification networks by (relatively small) subnetworks, which branch off the main network at some layer and produce an output padv ∈ [0, 1] which is interpreted as the probability of the input being adversarial. We call this subnetwork "adversary detection network" (or "detector" for short) and train it to classify network inputs into being regular examples or examples generated by a specific adversary. For this, we first train the classification networks on the regular (non-adversarial) dataset as usual and subsequently generate adversarial examples for each data point of the train set using one of the methods discussed in Section 3.1. We thus obtain a balanced, binary classification dataset of twice the size of the original dataset consisting of the original data (label zero) and the corresponding adversarial examples (label one). Thereupon, we freeze the weights of the classification network and train the detector such that it minimizes the cross-entropy of padv and the labels. The details of the adversary detection subnetwork and how it is attached to the classification network are specific for datasets and classification networks. Thus, evaluation and discussion of various design choices of the detector network are provided in the respective section of the experimental results."}, {"section_index": "7", "section_name": "3.3 DYNAMIC ADVERSARIES AND DETECTORS", "section_text": "In the worst case, an adversary might not only have access to the classification network and its gradient but also to the adversary detector and its gradient (footnote 2). In this case, the adversary might potentially generate inputs to the network that fool both the classifier (i.e., get classified wrongly) and fool the detector (i.e., look innocuous). In principle, this can be achieved by replacing the cost Jcls(x, ytrue(x)) by (1 − σ)Jcls(x, ytrue(x)) + σJdet(x, 1), where σ ∈ [0, 1] is a hyperparameter and Jdet(x, 1) is the cost (cross-entropy) of the detector for the generated x and the label one, i.e., being adversarial. An adversary maximizing this cost would thus aim at letting the classifier mis-label the input x and making the detector's output padv as small as possible. The parameter σ allows trading off these two objectives. For generating x, we propose the following extension of the basic iterative (l∞) method:
$x^{adv}_{n+1} = \operatorname{Clip}^{\varepsilon}_{x}\big\{\, x^{adv}_n + \alpha \big[ (1-\sigma) \operatorname{sgn}(\nabla_x J_{cls}(x^{adv}_n, y_{true}(x))) + \sigma \operatorname{sgn}(\nabla_x J_{det}(x^{adv}_n, 1)) \big] \,\big\}$
Note that we found a smaller α to be essential for this method to work; more specifically, we use α = 0.25. Since such an adversary can adapt to the detector, we call it a dynamic adversary. To counteract dynamic adversaries, we propose dynamic adversary training, a method for hardening detectors against dynamic adversaries. Based on the approach proposed by Goodfellow et al. (2015), instead of precomputing a dataset of adversarial examples, we compute the adversarial examples on-the-fly for each mini-batch and let the adversary modify each data point with probability 0.5. Note that a dynamic adversary will modify a data point differently every time it encounters the data point since it depends on the detector's gradient and the detector changes over time. We extend this approach to dynamic adversaries by employing a dynamic adversary, whose parameter σ is selected uniformly at random from [0, 1], for generating the adversarial data points during training. By training the detector in this way, we implicitly train it to resist dynamic adversaries for various values of σ. In principle, this approach bears the risk of oscillation and unlearning for σ > 0 since both the detector and the adversary adapt to each other (i.e., there is no fixed data distribution). In practice, however, we found this approach to converge stably without requiring careful tuning of hyperparameters.
2 We would like to emphasize that this is a stronger assumption than granting the adversary access to only the original classifier's predictions and gradients, since the classifier's predictions often need to be presented to a user (and thus also to an adversary). The same is typically not true for the predictions of the adversary detector as they will only be used internally.
Figure 1: (Top) ResNet used for classification. Numbers on top of arrows denote the number of feature maps and numbers below arrows denote spatial resolutions. Conv denotes a convolutional layer, Res*5 denotes a sequence of 5 residual blocks as introduced by He et al. (2016), GAP denotes a global-average pooling layer and Dens a fully-connected layer. Spatial resolutions are decreased by strided convolution and the number of feature maps on the residual's shortcut is increased by 1x1 convolutions. All convolutional layers have 3x3 receptive fields and are followed by batch normalization and rectified linear units. (Bottom) Topology of detector network, which is attached to one of the AD(i) positions. MP denotes max-pooling and is optional: for AD(3), the second pooling layer is skipped, and for AD(4), both pooling layers are skipped."}, {"section_index": "8", "section_name": "4 EXPERIMENTAL RESULTS", "section_text": "In this section, we present results on the detectability of adversarial perturbations on the CIFAR10 dataset (Krizhevsky, 2009), both for static and dynamic adversaries. Moreover, we investigate whether adversarial perturbations are also detectable in higher-resolution images based on a subset of the ImageNet dataset (Russakovsky et al., 2015)."}, {"section_index": "9", "section_name": "4.1 CIFAR10", "section_text": "We use a 32-layer Residual Network (He et al., 2016; ResNet) as classifier.
The structure of the network is shown in Figure 1. The network has been trained for 100 epochs with stochastic gradient descent and momentum on 45000 data points from the train set. The momentum term was set to 0.9 and the initial learning rate was set to 0.1, reduced to 0.01 after 41 epochs, and further reduced to 0.001 after 61 epochs. After each epoch, the network's performance on the validation data (the remaining 5000 data points from the train set) was determined. The network with maximal performance on the validation data was used in the subsequent experiments (with all tunable weights being fixed). This network's accuracy on non-adversarial test data is 91.3%. We attach an adversary detection subnetwork (called "detector" below) to the ResNet. The detector is a convolutional neural network using batch normalization (Ioffe & Szegedy, 2015) and rectified linear units. In the experiments, we investigate different positions where the detector can be attached (see also Figure 1).
Figure 2: (Left) Illustration of detectability of different adversaries and values for ε on CIFAR10. The x-axis shows the predictive accuracy of the CIFAR10 classifier on adversarial examples of the test data for different adversaries. The y-axis shows the corresponding detectability of the adversarial examples, with 0.5 corresponding to chance level. "No" corresponds to an "adversary" that leaves the input unchanged. (Right) Analysis of the detectability of adversarial examples of different adversaries for different attachment depths of the detector."}, {"section_index": "10", "section_name": "4.1.1 STATIC ADVERSARIES", "section_text": "In this subsection, we investigate a static adversary, i.e., an adversary that only has access to the classification network but not to the detector. The detector was trained for 20 epochs on 45000 data points from the train set and their corresponding adversarial examples using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.0001 and β1 = 0.99, β2 = 0.999. The remaining 5000 data points from the CIFAR10 train set are used as validation data and used for model selection. The detector was attached to position AD(2) (see Figure 1) except for the DeepFool-based adversaries, where the detector was attached to AD(4); see below for a discussion. For the "Fast" and "Iterative" adversaries, the parameter ε from Section 3.1 was chosen from [1, 2, 3, 4] for l∞-based methods and from [20, 40, 60, 80] for l2-based methods; larger values of ε generally result in reduced accuracy of the classifier but increased detectability.
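To make the attack computations above concrete, the following is a minimal NumPy sketch of the fast and basic iterative methods of Section 3.1. The function grad_loss(x, y) is an assumed interface returning the classifier's input gradient, and pixel values in [0, 255] are an assumption consistent with the ε values used here:
```python
import numpy as np

def fast_method(x, y_true, grad_loss, eps):
    """Fast (gradient sign) method: one step of size eps under the l-inf norm."""
    x_adv = x + eps * np.sign(grad_loss(x, y_true))
    return np.clip(x_adv, 0.0, 255.0)          # stay in the valid pixel range

def basic_iterative_linf(x, y_true, grad_loss, eps, alpha=1.0, n_iter=10):
    """Basic iterative (l-inf) method: repeated fast steps of size alpha,
    clipped back into the eps-neighborhood of x after every step."""
    x_adv = x.copy()
    for _ in range(n_iter):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, y_true))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # eps-ball constraint
        x_adv = np.clip(x_adv, 0.0, 255.0)
    return x_adv

def basic_iterative_l2(x, y_true, grad_loss, eps, alpha=20.0, n_iter=10):
    """l2 variant: move along the normalized gradient, then project back
    onto the eps-ball around x if the l2 distance exceeds eps."""
    x_adv = x.copy()
    for _ in range(n_iter):
        g = grad_loss(x_adv, y_true)
        x_adv = x_adv + alpha * g / (np.linalg.norm(g) + 1e-12)
        delta = x_adv - x
        dist = np.linalg.norm(delta)
        if dist > eps:                             # project onto the eps-sphere
            x_adv = x + delta * (eps / dist)
        x_adv = np.clip(x_adv, 0.0, 255.0)
    return x_adv
```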
For the \"Iterative\"' method with l2-norm, we used a = 20 i.e., in each iteration we make a step of l2 distance 20. Please note that these values of e are based or assuming a range of [0, 255 per color channel of the input.\nFigure2[(left) compares the detectability3|of different adversaries. In general, points in the lower left of the plot correspond to stronger adversaries because their adversarial examples are harder to detect and at the same time fool the classifier on most of the images. Detecting adversarial examples works surprisingly well given that no differences are perceivable to humans for all shown settings: the detectability is above 80% for all adversaries which decrease classification accuracy below 30% and above 90% for adversaries which decrease classification accuracy below 10%. Comparing the different adversaries, the \"Fast\"' adversary can generally be considered as a weak adversary, the DeepFool based methods as relatively strong adversaries, and the \"Iterative\"' method being somewhere in-between. Moreover, the methods based on the l2-norm are generally slightly stronger than their loo-norm counter-parts.\nFigure[2|(right) compares the detectability of different adversaries for detectors attached at different points to the classification network. e was chosen minimal under the constraint that the classification. accuracy is below 30%. For the \"Fast\"' and \"Iterative\"' adversaries, the attachment position AD(2 works best, i.e., attaching to a middle layer where more abstract features are already extracted but still the full spatial resolution is maintained. For the DeepFool methods, the general pattern is similar except for AD(4), which works best for these adversaries..\nFigure 3|illustrates the generalizability of trained detectors for the same adversary with different. choices of e: while a detector trained for large e does not generalize well to small e, the other direction works reasonably well. Figure4 shows the generalizability of detectors trained for one adversary. when tested on data from other adversaries (c was chosen again minimal under the constraint that the\n3Detectability refers to the accuracy of the detector. The detectability on the test data is calculated as follows for every test sample, a corresponding adversarial example is generated. The original and the corresponding adversarial examples form a joint test set (twice the size of the original test set). This test set is shuffled anc the detector is evaluated on this dataset. Original and corresponding adversarial example are thus processec independently.\nFigure 3: Transferability on CIFAR10 of detector trained for adversary with maximal distortion e fit when tested on the same adversary with distortion etest. Different plots show different adversaries Numbers correspond to the accuracy of detector on unseen test data.\nT Fast 0.97 0.96 0.92 0.71 0.75 S Iterative (l.) 0.69 0.89 0.87 0.65 0.68 Iterative (l2) 0.61 0.79 0.87 0.59 0.63 DeepFool (l2) 0.61 0.69 0.76 0.82 0.80 DeepFool (lx) 0.68 0.80 0.80 0.78 0.79 aast (e lerreeee (eg) poopeed Adversary fit"}, {"section_index": "11", "section_name": "4.1.2 DYNAMIC ADVERSARIES", "section_text": "In this section, we evaluate the robustness of detector networks to dynamic adversaries (see Sectior. 3.3). For this, we evaluate the detectability of dynamic adversaries for E {0.0, 0.1, . .., 1.0}. We. 
use the same optimizer and detector network as in Section4.1.1 When evaluating the detectability of dynamic adversaries with o close to 1, we need to take into account that the adversary might choose. to solely focus on fooling the detector, which is trivially achieved by leaving the input unmodified. Thus, we ignore adversarial examples that do not cause a misclassification in the evaluation of. the detector and evaluate the detector's accuracy on regular data versus the successful adversarial. examples. Figure 5 shows the results of a dynamic adversary with e = 1 against a static detector which was trained to only detect static adversaries, and a dynamic detector, which was explicitly. trained to resist dynamic adversaries. As can be seen, the static detector is not robust to dynamic. adversaries since for certain values of , namely = 0.3 and = 0.4, the detectability is close tc\nFast Iterative (l2) Iterative (l.) 0.90 0.70 0.51 0.50 20 0.77 0.69 0.66 0.56 0.89 0.77 0.65 0.57 0.94 0.97 0.89 0.70 teest 40 0.84 0.87 0.85 0.77 0.95 0.96 0.94 0.88 2 2 0.93 0.99 0.99 0.98 3 0.85 0.90 0.92 0.91 3 0.95 0.97 0.98 0.97 0.85 0.99 1.00 1.00 0.86 0.91 0.94 0.95 0.96 0.98 0.99 0.99 1 2 3 4 20 40 60 80 1 2 3 4 Efit Efit E fit\nFigure 4: Transferability on CIFAR10 of detector trained for one adversary when tested on other. adversaries. The maximal distortion e of the adversary (when applicable) has been chosen minimally such that the predictive accuracy of the classifier is below 30%. Numbers correspond to the accuracy. of the detector on unseen test data..\nclassification accuracy is below 30%): we can see that detectors generalize well between loo- and l2-norm based variants of the same approach. Moreover, detectors trained on the stronger \"Iterative' adversary generalize well to the weaker \"Fast\"' adversary but not vice versa. Detectors trained for the DeepFool-based methods do not generalize well to other adversaries; however, detectors trained for the \"Iterative\"' adversaries generalize relatively well to the DeepFool adversaries.\n0.90 0.85 0.80 0.75 0.70 0.65 0.60 Static Detector. 0.55 Dynamic Detector 0.50 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Predictive accuracy on adv. images\nFigure 5: Illustration of detectability versus classification accuracy of a dynamic adversary fo different values of o against a static and dynamic detector. The parameter o has been chosen as E {0.0, 0.1, ..., 1.0}, with smaller values of corresponding to lower predictive accuracy, i.e being further on the left.\nchance level while the predictive performance of the classifier is severely reduced to less than 30% accuracy. A dynamic detector is considerably more robust and achieves a detectability of more thar 70% for any choice of o."}, {"section_index": "12", "section_name": "4.2 10-CLASS IMAGENET", "section_text": "In this section, we report results for static adversaries on a subset of ImageNet consisting of all data. from ten randomly selected classes4 The motivation for this section is to investigate whether adver. sarial perturbations can be detected in higher-resolution images and for other network architectures than residual networks. We limit the experiment to ten classes in order to keep the computational. resources required for computing the adversarial examples small and avoid having too similar classes. which would oversimplify the task for the adversary. We use a pretrained VGG16 (Simonyan &. Zisserman2015) as classification network and add a layer before the softmax which selects only. 
the 10 relevant class entries from the logits vector. Based on preliminary experiments, we attach the. detector network after the fourth max-pooling layer. The detector network consists of a sequence. of five 3x3 convolutions with 196 feature maps each using batch-normalization and rectified linear. units, followed by a 1x1 convolution which maps onto the 10 classes, global-average pooling, and. a softmax layer. An additional 2x2 max-pooling layer is added after the first convolution. Note. that we did not tune the specific details of the detector network; other topologies might perform. better than the results reported below. When applicable, we vary e E [2, 4, 6] for lo.-based methods. and e E [400, 800, 1200] for l2. Moreover, we limit changes of the DeepFool adversaries to an lx. distance of 6 since the adversary would otherwise sometimes generate distortions which are clearly. perceptible. We train the detector for 500 epochs using the Adam optimizer with a learning rate of. 0.0001 and 1 = 0.99, 2 = 0.999.\nFigure|6|compares the detectability of different static adversaries. All adversaries fail to decrease. predictive accuracy of the classifier below the chance level of 0.1 (note that predictive accuracy refers. to the accuracy on the 10-class problem not on the full 1000-class problem) for the given values of. e. Nevertheless, detectability is 85% percent or more with the exception of the \"Iterative\"' l2-based. adversary with e = 400. For this adversary, the detector only reaches chance level. Other choices of the detector's attachment depth, internal structure, or hyperparameters of the optimizer might achieve.\n4The synsets of the selected classes are: palace; joystick; bee; dugong, Dugong dugon; cardigan; modem confectionery, confectionary, candy store; valley, vale; Persian cat; stone wall. Classes were selected by randomly drawing 10 ILSVRC2012 Synset-IDs (i.e. integers from 1, 1000), using the randint function of the python-package numpy after initializing numpy's random number generator seed with O. This results in a train set of 10000 images, a validation set of 2848 images, and a test set (from ImageNet's validation data) of 500 images.\n1.0 Fast 0.9 Iterative () Iterative(x) 0.8 DeepFool () 0.7 DeepFool () No 0.6 0.5 0.4 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Predictive accuracy on adv. images\n1.0 Fast 0.9 Iterative ( A Iterative () 0.8 DeepFool () 0.7 DeepFool ( r No 0.6 0.5 D 0.4 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Dr dicti dy\nFigure 7: Transferability on 10-class ImageNet of detector trained for adversary with maximal distortion e fit when tested on the same adversary with distortion etest. Different plots show different. adversaries. Numbers correspond to the accuracy of the detector on unseen test data.\nFigure7 illustrates the transferability of the detector between different values of e. The results are roughly analogous to the results on CIFAR10 in Section4.1.1} detectors trained for an adversary for a small value of e work well for the same adversary with larger e but not vice versa. 
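The transfer numbers reported in Figures 7 and 8 are detector accuracies computed with the protocol of footnote 3. A minimal sketch of that evaluation, under assumed detector(x) -> p_adv and adversary(x) interfaces (these names are ours, not the paper's):
```python
import numpy as np

def detectability(detector, adversary, x_test, seed=0):
    """Detector accuracy on the joint set of originals (label 0) and their
    adversarial counterparts (label 1), following footnote 3."""
    rng = np.random.default_rng(seed)
    x_adv = np.stack([adversary(x) for x in x_test])
    xs = np.concatenate([np.asarray(x_test), x_adv])
    ys = np.concatenate([np.zeros(len(x_test)), np.ones(len(x_adv))])
    order = rng.permutation(len(xs))               # shuffle the joint test set
    xs, ys = xs[order], ys[order]
    preds = np.array([float(detector(x) >= 0.5) for x in xs])
    return float(np.mean(preds == ys))             # 0.5 corresponds to chance
```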
Note that a detector trained for the \"Iterative\" l2-based adversary with e = 1200 can detect the changes of the same adversary with e = 400 with 78% accuracy; this emphasizes that this adversary is no1 principally undetectable but that rather the optimization of a detector for this setting is difficult Figure[8|shows the transferability between adversaries: transferring the detector works well between similar adversaries such as between the two DeepFool adversaries and between the Fast and Iterative adversary based on the loo distance. Moreover, detectors trained for DeepFool adversaries work well on all other adversaries. In summary, transferability is not symmetric and typically works best between similar adversaries and from stronger to weaker adversary."}, {"section_index": "13", "section_name": "5 DISCUSSION", "section_text": "Figure 6: Illustration of detectability of different adversaries and values for e on 10-class ImageNet. The x-axis shows the predictive accuracy of the ImageNet classifier on adversarial examples of the. test data for different adversaries. The y-axis shows the corresponding detectability of the adversarial examples, with 0.5 corresponding to chance level..\nbetter results: however, this failure case emphasizes that the detector has to detect very subtle pattern. and the optimizer might get stuck in bad local optima or plateaus\nWhy can tiny adversarial perturbations be detected that well? Adopting the boundary tilting perspec. tive of Tanay & Griffin|(2016), strong adversarial examples occur in situations in which classification. boundaries are tilted against the data manifold such that they lie close and nearly parallel to the. data manifold. A detector could (potentially) identify adversarial examples by detecting inputs. which are slightly off the data manifold's center in the direction of a nearby class boundary. Thus the detector can focus on detecting inputs which move away from the data manifold in a certain. direction, namely one of the directions to a nearby class boundary (the detector does not have explicit\nFast 0.89 0.88 0.63 0.84 0.89 test Iterative (lo) 0.84 0.87 0.61 0.81 0.89 Iterative (l2) 0.66 0.74 0.90 0.88 0.87 DeepFool (l2) 0.61 0.66 0.78 0.85 0.81 DeepFool (l) 0.80 0.83 0.69 0.83 0.91 aast (t3 lrrreee (eg) lereeee (e) penneed Adversary fit\nFigure 8: Transferability on 10-class ImageNet of detector trained for one adversary when tested oi. other adversaries. The maximal distortion of the loo-based Iterative adversary has been chosen a = 2 and as e = 800 for the l-based adversary. Numbers correspond to the accuracy of detector oi. unseen test data.\nWhy is the joint classifier/detector system harder to fool? For a static detector, there might be areas which are adversarial to both classifier and detector: however, this will be a (small) subset of the areas which are adversarial to the classifier alone. Nevertheless, results in Section|4.1.2[show that such a static detector can be fooled along with the classifier. However, a dynamic detector is considerably harder to fool: on the one hand, it might further reduce the number of areas which are both adversarial to classifier and detector. On the other hand, the areas which are adversarial to the detector might become increasingly non-regular and difficult to find by gradient descent-based adversaries.\nIn this paper, we have shown empirically that adversarial examples can be detected surprisingly well. using a detector subnetwork attached to the main classification network. 
While this does not directly. allow classifying adversarial examples correctly, it allows mitigating adversarial attacks against machine learning systems by resorting to fallback solutions, e.g., a face recognition might request. human intervention when verifying a person's identity and detecting a potential adversarial attack. Moreover, being able to detect adversarial perturbations may in the future enable a better understand ing of adversarial examples by applying network introspection to the detector network. Furthermore.. the gradient propagated back through the detector may be used as a source of regularization of the. classifier against adversarial examples. We leave this to future work. Additional future work will be developing stronger adversaries that are harder to detect by adding effective randomization which. would make selection of adversarial perturbations less regular. Finally, developing methods for. training detectors explicitly such that they can detect many different kinds of attacks reliably at the same time would be essential for safety- and security-related applications.\naast ( lrrereeee Adyersary fit\n<nowledge of class boundaries but it might learn about their direction implicitly from the adversarial. training data). However, training a detector which captures these directions in a model with small. capacity and generalizes to unseen data requires certain regularities in adversarial perturbations. The. results of[Moosavi-Dezfooli et al.(2016a) suggest that there may exist regularities in the adversarial. erturbations since universal perturbations exist. However, these perturbations are not unique and data-dependent adversaries might potentially choose among many different possible perturbations. in a non-regular way, which would be hard to detect. Our positive results on detectability suggest. that this is not the case for the tested adversaries. Thus, our results are somewhat complementary. to Moosavi-Dezfooli et al.(2016a): while they show that universal, image-agnostic perturbations. exist, we show that image-dependent perturbations are sufficiently regular to be detectable. Whether. a detector generalizes over different adversaries depends mainly on whether the adversaries choose. among many different possible perturbations in a consistent way..\nAlex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Master's thesis, University of Toronto, 2009.\nSeyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universa adversarial perturbations. arXiv:1610.08401, 2016a.\nSeyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2016b.\nNicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a. Defense to Adversarial Perturbations against Deep Neural Networks. In Symposium on Security & Privacy, pp. 582-597, San Jose, CA, 2016b.\nAndras Rozsa, Terrance E. Boult, and Manuel Gunther. Are accuracy and robustness correlated? In. International Conference on Machine Learning and Applications (ICMLA), December 2016\nWe would like to thank Michael Herman and Michael Pfeiffer for helpful discussions and their feedback on drafts of this article. Moreover, we would like to thank the developers of Theano The Theano Development Team 2016), keras (https: //keras.io), and seaborn (http: //\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 
Deep Residual Learning for Image Recognition. In Computer Vision and Pattern Recognition (CVPR), 2O16\nSergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In International Conference on Machine Learning (ICML), pp. 448-456, Lille, 2015.\nKaren Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In JInternational C\nStephan Zheng, Yang Song, Thomas Leung, and Ian Goodfellow. Improving the Robustness of Deep. Neural Networks via Stability Training. In Computer Vision and Pattern Recognition CVPR, 2016"}] |
ryh_8f9lg | [{"section_index": "0", "section_name": "CLASSLESS ASSOCIATION USING NEURAL NETWORKS", "section_text": "Marcus Liwicki
The goal of this paper is to train a model based on the relation between two instances that represent the same unknown class. This scenario is inspired by the Symbol Grounding Problem and the association learning in infants. We propose a novel model called Classless Association. It has two parallel Multilayer Perceptrons (MLPs) that use one network as a target of the other network, and vice versa. In addition, the presented model is trained based on an EM-approach, in which the output vectors are matched against a statistical distribution. We generate four classless datasets based on MNIST, where the input is two different instances of the same digit. In addition, the digits have a uniform distribution. Furthermore, our classless association model is evaluated against two scenarios: totally supervised and totally unsupervised. In the first scenario, our model reaches a good performance in terms of accuracy and the classless constraint. In the second scenario, our model reaches better results than two clustering algorithms."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Infants are able to learn the binding of abstract concepts to the real world via their sensory input. For example, the abstract concept ball is bound to the visual representation of a rounded object and the auditory representation of the phonemes /b/ /a/ /l/. This scenario can be seen as the Symbol Grounding Problem (Harnad, 1990). Moreover, infants are also able to learn the association between different sensory input signals while they are still learning the binding of the abstract concepts. Several results have shown a correlation between object recognition (visual) and vocabulary acquisition (auditory) in infants (Balaban & Waxman, 1997; Asano et al., 2015). One example of this correlation is the first words that infants learn. In that case, the words are mainly nouns which are visible concepts, such as dad, mom, ball, dog, cat (Gershkoff-Stowe & Smith, 2004). As a result, we can define the previous scenario in terms of a machine learning task. More formally, the task is defined by learning the association between two parallel streams of data that represent the same unknown class (or abstract concept). Note that this task is different from the supervised association where the data has labels. First, the semantic concept is unknown in our scenario, whereas it is known in the supervised case. Second, both classifiers need to agree on the same coding scheme for each sample pair during training. In contrast, the coding scheme is already pre-defined before training in the supervised case. Figure 1 shows an example of the difference between a supervised association task and our scenario.
In this paper, we propose a novel model that is trained based on the association of two input samples of the same unknown class. The presented model has two parallel Multilayer Perceptrons (MLPs) with an Expectation-Maximization (EM) (Dempster et al., 1977) training rule that matches the network output against a statistical distribution. Also, both networks agree on the same classification because one network is used as the target of the other network, and vice versa. Our model has some"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Usually, classifiers require labeled data for training.
However, the presented scenario needs an alternative training mechanism. One way is to train based on statistical distributions. Casey (1986) proposed to solve the OCR problem using language statistics for inferring from images to characters. Later on, Knight et al. (2006) applied a similar idea to machine translation. Recently, Sutskever et al. (2015) defined the Output Distribution Matching (ODM) cost function for dual autoencoders and generative networks.
Figure 1: Difference between the supervised and classless association tasks. The classless association is more challenging than the supervised association because the model requires to learn to discriminate the semantic concept without labels. In addition, both classifiers need to agree on the same coding scheme for each semantic concept. In contrast, the mentioned information is already known in the supervised association scenario.
similarities with Siamese Networks proposed by Chopra et al. (2005). They introduced their model for supervised face verification where training is based on constraints of pairs of faces. The constraints exploit the relation of two faces that may or may not be instances of the same person. However, there are some differences to our work. First, our training rule does not have pre-defined classes before training, whereas the Siamese Network requires labeled samples. Second, our model only requires instances of the same unknown class, whereas the Siamese network requires two types of input pairs: a) instances of the same person and b) instances of two different persons. Our contributions in this paper are:
- We define a novel training rule based on matching the output vectors of the presented model and a statistical distribution. Note that the output vectors are used as symbolic features, similar to the Symbol Grounding Problem. Furthermore, the proposed training rule is based on an EM-approach and classifies each sample based on generated pseudo-classes (Section 2.1).
- We propose a novel architecture for learning the association in the classless scenario. Moreover, the presented model uses two parallel MLPs, which require to agree on the same class for each input sample. This association is motivated by the correlation between different sensory input signals in infant development. In more detail, one network is the target of the other network, and vice versa. Also, note that our model is gradient-based and can be extended to deeper architectures (Section 2.2).
- We evaluate our classless association task against two cases: totally supervised and totally unsupervised. In this manner, we can verify the range of our results in terms of supervised and unsupervised cases since our model is neither totally supervised nor totally unsupervised. We compare against an MLP trained with labels as the supervised scenario (upper bound) and two clustering algorithms (K-means and Hierarchical Agglomerative) as the unsupervised scenario (lower bound). First, our model reaches better results than the clustering. Second, our model shows promising results with respect to the supervised scenario (Sections 3 and 4)."}, {"section_index": "3", "section_name": "2 METHODOLOGY", "section_text": "In this paper, we are interested in the classless association task in the following scenario: two input instances x(1) and x(2) belong to the same unknown class c, where x(1) ∈ X(1) and x(2) ∈ X(2) and the sets X(1), X(2) are disjoint, and the goal is to learn that the output classification of x(1) and x(2) is the same, c(1) = c(2), where c(1), c(2) ∈ C and C is the set of possible classes. With this in mind, we present a model that has two parallel Multilayer Perceptrons (MLPs) that are trained with an EM-approach that associates both networks in the following manner: one network uses the other network as a target, and vice versa. We explain how the output vectors of the network are matched to a statistical distribution in Section 2.1, and the classless association learning is presented in Section 2.2."}, {"section_index": "4", "section_name": "2.1 STATISTICAL CONSTRAINT", "section_text": "One of our constraints is to train an MLP without classes.
As a result, we use an alternative training rule based on matching the output vectors and a statistical distribution. For simplicity, we explain our training rule using a single MLP with one hidden layer, which is defined by
$z = \mathrm{network}(x; \theta) \qquad (1)$
where x ∈ R^n is the input vector, θ encodes the parameters of the MLP, and z ∈ R^C is the output vector. Moreover, the output vectors (z1, ..., zm) of a mini-batch of size m are matched to a target distribution (E[z1, ..., zm] ~ Φ ∈ R^C), e.g., a uniform distribution. We have selected a uniform distribution because it is an ideal case to have a balanced dataset for any classifier. However, it is possible to extend to different distributions. We introduce a new parameter that is a weighting vector γ ∈ R^C. The intuition behind it is to guide the network based on a set of generated pseudo-classes c*. These pseudo-classes can be seen as cluster indexes that group similar elements. With this in mind, we also propose an EM-training rule for learning the unknown class given a desired target distribution. We want to point out that the pseudo-classes are internal representations of the network that are independent of the labels.
The E-step obtains the current statistical distribution given the output vectors (z1, ..., zm) and the weighting vector (γ). In this case, an approximation of the distribution is obtained by the following equation
$\hat{\Phi} = \frac{1}{M} \sum_{i=1}^{M} \mathrm{power}(z_i, \gamma) \qquad (2)$
where γ is the weighting vector, z_i is the output vector of the network, M is the number of elements, and the function power (footnote 1) is the element-wise power operation between the output vector and the weighting vector. We have used the power function because the output vectors (z1, ..., zm) are quite similar between them at the initial state of the network, and the power function provides an initial boost for learning to separate the input samples into different pseudo-classes in the first iterations. Moreover, we can retrieve the pseudo-classes by the maximum value of the following equation
$c_i^* = \arg\max_c \mathrm{power}(z_i, \gamma) \qquad (3)$
where c_i^* is the pseudo-class, which is used in the M-step for updating the MLP weights. Also, note that the pseudo-classes are not updated in an online manner. Instead, the pseudo-classes are updated after a certain number of iterations. The reason is that the network requires a number of iterations to learn the common features.
1 We decided to use the power function notation instead of z^γ in order to simplify the index notation.
The M-step updates the weighting vector γ given the current distribution Φ̂. Also, the MLP parameters (θ) are updated given the current classification given by the pseudo-classes. The cost function is the variance between the distribution and the desired statistical distribution, which is defined by
$cost = (\hat{\Phi} - \Phi)^2 \qquad (4)$
where Φ̂ is the current statistical distribution of the output vectors, and Φ is a vector that represents the desired statistical distribution, e.g., a uniform distribution. Then, the weighting vector is updated via gradient descent
$\gamma = \gamma - \alpha \nabla_\gamma cost \qquad (5)$
where α is the learning rate and ∇γ cost is the derivative w.r.t. γ. Also, the MLP weights are updated via the generated pseudo-classes, which are used as targets in the backpropagation step.
Figure 2: The proposed training rule applied to a single MLP. The E-step generates a set of pseudo-classes c1, ..., cm for each output in the mini-batch of size m, and a probability approximation of the output vectors in the mini-batch. The M-step updates the MLP weights given the pseudo-classes and the weighting vector given the target statistical distribution Φ.
In summary, we propose an EM-training rule for matching the network output vectors and a desired target statistical distribution. The E-step generates pseudo-classes and finds an approximation of the current statistical distribution of the output vectors. The M-step updates the MLP parameters and the weighting vector. With this in mind, we adapt the mentioned training rule for the classless association task.
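Before turning to the two-network case, a minimal NumPy sketch of the E-step and the γ-update may be helpful. The closed-form gradient below follows from d(z^γ)/dγ = z^γ log z and is our derivation; the paper only states that γ is updated via gradient descent (Equation 5):
```python
import numpy as np

def e_step(Z, gamma):
    """E-step: power-weighted outputs, batch distribution estimate, pseudo-classes.

    Z: (M, C) softmax outputs of the MLP for one mini-batch; gamma: (C,) weights.
    """
    P = Z ** gamma                     # element-wise power(z_i, gamma)
    phi_hat = P.mean(axis=0)           # Eq. (2): estimated class distribution
    pseudo = P.argmax(axis=1)          # Eq. (3): pseudo-class per sample
    return phi_hat, pseudo

def m_step_gamma(Z, gamma, phi_target, lr):
    """M-step for the weighting vector: one gradient step on cost = (phi_hat - phi)^2."""
    P = Z ** gamma
    phi_hat = P.mean(axis=0)
    # d cost / d gamma_c = 2 (phi_hat_c - phi_c) * mean_i( z_ic^gamma_c * log z_ic )
    grad = 2.0 * (phi_hat - phi_target) * (P * np.log(Z + 1e-12)).mean(axis=0)
    return gamma - lr * grad           # Eq. (5)
```
The MLP parameters θ would then be updated by an ordinary backpropagation step that uses the pseudo-classes returned by e_step as classification targets.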
Figure 2 summarizes the presented EM training rule and its components."}, {"section_index": "5", "section_name": "2.2 CLASSLESS ASSOCIATION LEARNING", "section_text": "Our second constraint is to classify both input samples as the same class and different from the other classes. Note that the pseudo-class (Equation 3) is used as identification for each input sample and it is not related to the semantic concept or labels. The presented classless association model is trained based on a statistical constraint. Formally, the input is represented by the pair x(1) ∈ R^n1 and x(2) ∈ R^n2, where x(1) and x(2) are two different instances of the same unknown label. The classless association model has two parallel Multilayer Perceptrons, MLP(1) and MLP(2), with a training rule that follows an EM-approach (cf. Section 2.1). Moreover, input samples are divided into several mini-batches of size m.
Initially, all input samples have random pseudo-classes c(1) and c(2), and both networks share the same desired statistical distribution Φ. Also, the weighting vectors γ(1) and γ(2) are initialized to one. Then each input element from the mini-batch is propagated forward to each MLP. Afterwards, an estimation of the statistical distribution for each MLP (Φ̂(1) and Φ̂(2)) is obtained. Furthermore, a set of pseudo-classes (c(1) and c(2)) is obtained for each network. Note that this first part can be seen as an E-step from Section 2.1. We want to point out that the pseudo-classes are updated only after a number of iterations.
Figure 3: Overview of the presented model for classless association of two input samples that represent the same unknown classes. The association relies on matching the network output and a statistical distribution. Also, it can be observed that our model uses the pseudo-classes obtained by MLP(1) as targets of MLP(2), and vice versa."}, {"section_index": "6", "section_name": "3 EXPERIMENTS", "section_text": "In this paper, we are interested in a simplified scenario inspired by the Symbol Grounding Problem and the association learning between sensory input signals in infants. We evaluated our model on four classless datasets that are generated from MNIST (LeCun & Cortes, 2010). The procedure of generating classless datasets from labeled datasets has already been applied in (Sutskever et al., 2015; Hsu & Kira, 2015). Each dataset has two disjoint sets, input 1 and input 2. The first dataset (MNIST) has two different instances of the same digit. The second dataset (Rotated-90 MNIST) has two different instances of the same digit, and all input samples in input 2 are rotated 90 degrees. The third dataset (Inverted MNIST) follows a similar procedure as the second dataset, but the transformation of the elements in input 2 is the invert function instead of rotation. The last dataset (Random Rotated MNIST) is more challenging because all elements in input 2 are randomly rotated between 0 and 2π. All datasets have a uniform distribution between the digits, and the dataset size is 21,000 samples for training and 4,000 samples for validation and testing.
The following parameters turned out to be optimal on the validation set. For the first three datasets, each internal MLP relies on two fully connected layers of 200 and 100 neurons, respectively. The learning rate for the MLPs was set to start at 1.0 and was continuously decaying by half after every 1,000 iterations. We set the initial weighting vector to 1.0 and updated it after every 1,000 iterations as well. Moreover, the best parameters for the fourth dataset were the same for MLP(1) and different for MLP(2), which has two fully connected layers of 400 and 150 neurons, respectively, and a learning rate starting at 1.2. The target distribution is uniform for all datasets. The decay of the learning rate (Equation 5) for the weighting vector was given by 1/(100 + epoch)^0.3, where epoch is the number of training iterations so far. The mini-batch size M is 5,250 sample pairs (corresponding to 25% of the training set) and the mean of the derivatives for each mini-batch is used for the back-propagation step of the MLPs. Note that the mini-batch is quite big compared with common setups. We infer from this parameter that the model requires a sample size big enough for estimating the uniform distribution and also needs to learn slower than traditional approaches. Our model was implemented in Torch.
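Since the paper's Torch implementation is not reproduced here, the following Python sketch shows one mini-batch step of the classless association training; mlp.forward and mlp.train_on are hypothetical interfaces, and note that in the paper the pseudo-classes and γ are refreshed only every 1,000 iterations rather than at every step:
```python
import numpy as np

def classless_association_step(mlp1, mlp2, x1, x2, gamma1, gamma2, phi, lr_gamma):
    """One mini-batch update of the classless association model (a sketch).

    mlp.forward(X) -> (M, C) softmax outputs; mlp.train_on(X, targets) runs one
    backprop step with integer class targets. Both interfaces are assumptions.
    """
    Z1 = mlp1.forward(x1)                  # outputs for input 1
    Z2 = mlp2.forward(x2)                  # outputs for input 2

    # E-step: pseudo-classes from the power-weighted outputs (Eq. 3)
    c1 = (Z1 ** gamma1).argmax(axis=1)
    c2 = (Z2 ** gamma2).argmax(axis=1)

    # M-step: each network is trained toward the *other* network's pseudo-classes
    mlp1.train_on(x1, targets=c2)
    mlp2.train_on(x2, targets=c1)

    # M-step for the weighting vectors: push each batch distribution toward phi
    for Z, gamma in ((Z1, gamma1), (Z2, gamma2)):
        P = Z ** gamma
        grad = 2.0 * (P.mean(0) - phi) * (P * np.log(Z + 1e-12)).mean(0)
        gamma -= lr_gamma * grad           # in-place gradient step (Eq. 5)

    return gamma1, gamma2
```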
Figure 4: Example of the presented model during classless training. In this example, there are ten pseudo-classes represented by each row of MLP(1) and MLP(2). Note that the output classifications are randomly selected (not cherry picking). Initially, the pseudo-classes are assigned randomly to all input pair samples, which holds a uniform distribution (first row). Then, the classless association model slowly starts learning the features and grouping similar input samples. Afterwards, the output classification of both MLPs slowly agrees during training, and the association matrix shows the relation between the occurrences of the pseudo-classes. [Rows shown: initial state and epochs 1,000, 3,000, and 49,000; purity rises from 10.9% at initialization to about 95.5% at epoch 49,000.]
To determine the baseline of our classless constraint, we compared our model against two cases: totally supervised and totally unsupervised. In the supervised case, we used the same MLP parameters and training for a fair comparison. In the unsupervised scenario, we applied k-means and agglomerative clustering to each set (input 1 and input 2) independently. The clustering algorithm implementations are provided by scikit-learn (Pedregosa et al., 2011).
In this work, we have generated ten different folds for each dataset and report the average results. We introduce the Association Accuracy for measuring association, and it is defined by the following equation
$\text{Association Accuracy} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\big[c_i^{(1)} = c_i^{(2)}\big]$
where the indicator function is one if the pseudo-classes c_i^(1) and c_i^(2) for MLP(1) and MLP(2), respectively, are equal, and zero otherwise, and N is the number of elements. In addition, we also report the Purity of each set (input 1 and input 2). Purity is defined by
$\text{Purity}(\Omega, C) = \frac{1}{N} \sum_{i=1}^{k} \max_j |c_i \cap gt_j|$
where Ω = {gt1, gt2, ..., gtj} is the set of ground-truth labels and C = {c1, c2, ..., ck} is the set of pseudo-classes in our model or the set of cluster indexes of K-means or Hierarchical Agglomerative clustering, and N is the number of elements.
Table 1 shows the Association Accuracy between our model and the supervised association task, and the Purity between our model and two clustering algorithms. First, the supervised association task performs better than the presented model. This was expected because our task is more complex in relation to the supervised scenario. However, we can infer from our results that the presented model has a good performance in terms of the classless scenario and the supervised method. Second, our model not only learns the association between input samples but also finds similar elements covered under the same pseudo-class. Also, we evaluated the purity of our model and found that the performance of our model reaches better results than both clustering methods for each set (input 1 and input 2).
Figure 4 illustrates an example of the proposed learning rule. The first two columns (MLP(1) and MLP(2)) are the output classifications (Equation 3) and each row represents a pseudo-class. We have randomly selected 15 output samples for each MLP (not cherry picking). Initially, the pseudo-classes are randomly selected for each MLP. As a result, the output classification of both networks does not show any visible discriminant element and the initial purity is close to random choice (first row). After 1,000 epochs, the networks start learning some features in order to discriminate the input samples. Some groups of digits are grouped together after 3,000 epochs. For example, the first row of MLP(2) shows several digits zero, whereas MLP(1) has not yet agreed on the same digit for that pseudo-class. In contrast, both MLPs have almost agreed on digit one at the fifth row. Finally, the association is learned using only the statistical distribution of the input samples, and each digit is represented by each pseudo-class.
Table 1: Association Accuracy (%) and Purity (%) results. Our model is compared with the supervised scenario (class labels are provided) and with K-means and Hierarchical Agglomerative clustering (no class information). A dash means the metric does not apply.
Dataset | Model | Association Accuracy (%) | Purity input 1 (%) | Purity input 2 (%)
MNIST | supervised association | 96.7 ± 0.3 | 96.7 ± 0.2 | 96.6 ± 0.3
MNIST | classless association | 87.4 ± 2.9 | 87.1 ± 6.6 | 87.0 ± 6.4
MNIST | K-means | - | 63.9 ± 2.2 | 62.5 ± 3.7
MNIST | Hierarchical Agglomerative | - | 64.9 ± 4.7 | 64.3 ± 5.5
Rotated-90 MNIST | supervised association | 93.2 ± 0.3 | 96.4 ± 0.2 | 96.6 ± 0.21
Rotated-90 MNIST | classless association | 86.5 ± 2.5 | 82.9 ± 4.5 | 82.9 ± 4.3
Rotated-90 MNIST | K-means | - | 65.0 ± 2.8 | 64.0 ± 3.6
Rotated-90 MNIST | Hierarchical Agglomerative | - | 65.4 ± 3.5 | 64.1 ± 4.1
Inverted MNIST | supervised association | 93.2 ± 0.3 | 96.5 ± 0.2 | 96.5 ± 0.2
Inverted MNIST | classless association | 89.2 ± 2.4 | 89.0 ± 6.8 | 89.1 ± 6.8
Inverted MNIST | K-means | - | 64.8 ± 2.0 | 65.0 ± 2.5
Inverted MNIST | Hierarchical Agglomerative | - | 64.8 ± 4.4 | 64.4 ± 3.8
Random Rotated MNIST | supervised association | 88.0 ± 0.5 | 96.5 ± 0.3 | 90.9 ± 0.5
Random Rotated MNIST | classless association | 69.3 ± 2.2 | 75.8 ± 7.3 | 65.3 ± 5.0
Random Rotated MNIST | K-means | - | 64.8 ± 2.6 | 14.8 ± 0.4
Random Rotated MNIST | Hierarchical Agglomerative | - | 65.9 ± 2.8 | 15.2 ± 0.5
Figure 5: Example of the best and worst results among all folds and datasets. It can be observed that our model is able to learn to discriminate each digit (first row). However, the presented model has a limitation that two or more digits are assigned to the same pseudo-class (last row of MLP(1) and MLP(2)).
Figure 5 shows the best and worst results of our model in two cases. The first row is the best result from the MNIST dataset. Each row of MLP(1) and MLP(2) represents a pseudo-class, and it can be observed that all digits are grouped together. In addition, the association matrix shows a distribution per digit close to the desired uniform distribution, and the purity of each input is close to the supervised scenario. In contrast, the second row is our worst result from the Random Rotated MNIST dataset. In this example, we can observe that some digits are recognized by the same pseudo-class, for example, digit one and seven (first two rows). However, there are two or more digits that are recognized by the same pseudo-class. For example, the last row shows that nine and four are merged. Our model is still able to reach better results than the unsupervised scenario."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "In this paper, we have shown the feasibility of training a model that has two parallel MLPs under the following scenario: pairs of input samples that represent the same unknown classes. This scenario was motivated by the Symbol Grounding Problem and association learning between sensory input signals in infant development. We proposed a gradient-based model for solving the classless association. Our model has an EM-training rule that matches the network output against a statistical distribution and uses one network as a target of the other network, and vice versa. Our model
Our model reaches better performance than K-means and Hierarchical Agglomerative clustering. In addition, we compared the presented model against a supervised method. We find that the presented model reaches good results with respect to the supervised method, even though the classless association imposes two extra conditions: unlabeled data and the requirement that both networks agree on the same pseudo-class. We want to point out that our model was evaluated in an optimal case where the input samples are uniformly distributed and the number of classes is known. However, we will explore the performance of our model when the number of classes and the statistical distribution are unknown. One way is to change the number of pseudo-classes. This can be seen as changing the number of clusters k in k-means. With this in mind, we are planning to do a more exhaustive analysis of the learning behavior with deeper architectures. Moreover, we will work on how a small set of labeled classes affects the performance of our model (similar to semi-supervised learning). Furthermore, we are interested in replicating our findings in more complex scenarios, such as multimodal datasets like TVGraz (Khan et al., 2009) or Wikipedia featured articles (Rasiwasia et al., 2010). Finally, our work can be applied to more classless scenarios where the data can be extracted simultaneously from different input sources at the same time. Also, transformation functions can be applied to input samples for creating the association without classes.

ACKNOWLEDGMENTS

We would like to thank Damian Borth, Christian Schulze, Jorn Hees, Tushar Karayil, and Philip Blandfort for helpful discussions.

REFERENCES

M. T. Balaban and S. R. Waxman. Do words facilitate object categorization in 9-month-old infants? Journal of Experimental Child Psychology, 64(1):3-26, January 1997. ISSN 0022-0965.

Richard G. Casey. Text OCR by solving a cryptogram. International Business Machines Incorporated, Thomas J. Watson Research Center, 1986.

Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pp. 539-546. IEEE, 2005.

A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1-38, 1977.

Lisa Gershkoff-Stowe and Linda B. Smith. Shape and the first hundred nouns. Child Development, 75(4):1098-1114, 2004. ISSN 0009-3920.

Stevan Harnad. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1):335-346, 1990.

Yen-Chang Hsu and Zsolt Kira. Neural network-based clustering using pairwise constraints. arXiv preprint arXiv:1511.06321, 2015.

Inayatullah Khan, Amir Saffari, and Horst Bischof. TVGraz: Multi-modal learning of object categories by combining textual and visual features. In AAPR Workshop, pp. 213-224, 2009.

Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits. 2010.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. R. G. Lanckriet, R. Levy, and N. Vasconcelos.
A new approach to cross-modal multimedia retrieval. In ACM International Conference on Multimedia, pp. 251-260, 2010.

Ilya Sutskever, Rafal Jozefowicz, Karol Gregor, Danilo Rezende, Tim Lillicrap, and Oriol Vinyals. Towards principled unsupervised learning. arXiv preprint arXiv:1511.06440, 2015.

Michiko Asano, Mutsumi Imai, Sotaro Kita, Keiichi Kitajo, Hiroyuki Okada, and Guillaume Thierry. Sound symbolism scaffolds language development in preverbal infants. Cortex, 63:196-205, 2015.

We have included two more examples of the classless training. In addition, we have generated some demos that show the training algorithm (https://goo.gl/xsmkFD).

Figure 1: Example of the classless training using the Inverted MNIST dataset.

Figure 2: Example of the classless training using the Random Rotated MNIST dataset.
HkIQH7qel

LEARNING RECURRENT SPAN REPRESENTATIONS FOR EXTRACTIVE QUESTION ANSWERING

Kenton Lee*
New York, NY
kentonl@cs.washington.edu

Tom Kwiatkowski, Ankur Parikh, Dipanjan Das
{tomkwiat, aparikh, dipanjand}@google.com

*Work completed during internship at Google, New York.

ABSTRACT

The reading comprehension task, that asks questions about a given evidence document, is a central problem in natural language understanding. Recent formulations of this task have typically focused on answer selection from a set of candidates pre-defined manually or through the use of an external NLP pipeline. However, Rajpurkar et al. (2016) recently released the SQuAD dataset in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed-length representations of all spans in the evidence document with a recurrent network. We show that scoring explicit span representations significantly improves performance over other approaches that factor the prediction into separate predictions about words or start and end markers. Our approach improves upon the best published results of Wang & Jiang (2016) by 5% and decreases the error of Rajpurkar et al.'s baseline by > 50%.

1 INTRODUCTION

A primary goal of natural language processing is to develop systems that can answer questions about the contents of documents. The reading comprehension task is of practical interest - we want computers to be able to read the world's text and then answer our questions - and, since we believe it requires deep language understanding, it has also become a flagship task in NLP research.

A number of reading comprehension datasets have been developed that focus on answer selection from a small set of alternatives defined by annotators (Richardson et al., 2013) or existing NLP pipelines that cannot be trained end-to-end (Hill et al., 2016; Hermann et al., 2015). Subsequently, the models proposed for this task have tended to make use of the limited set of candidates, basing their predictions on mention-level attention weights (Hermann et al., 2015), or centering classifiers (Chen et al., 2016), or network memories (Hill et al., 2016) on candidate locations.

Recently, Rajpurkar et al. (2016) released the less restricted SQuAD dataset, which does not place any constraints on the set of allowed answers, other than that they should be drawn from the evidence document. Rajpurkar et al. proposed a baseline system that chooses answers from the constituents identified by an existing syntactic parser. This allows them to prune the O(N²) answer candidates in each document of length N, but it also effectively renders 20.7% of all questions unanswerable.

Subsequent work by Wang & Jiang (2016) significantly improves upon this baseline by using an end-to-end neural network architecture to identify answer spans by labeling either individual words, or the start and end of the answer span. Both of these methods do not make independence assumptions about substructures, but they are susceptible to search errors due to greedy training and decoding.

In contrast, here we argue that it is beneficial to simplify the decoding procedure by enumerating all possible answer spans.
By explicitly representing each answer span, our model can be globally normalized during training and decoded exactly during evaluation. A naive approach to building the O(N²) spans of up to length N would require a network that is cubic in size with respect to the passage length, and such a network would be untrainable. To overcome this, we present a novel neural architecture called RASoR that builds fixed-length span representations, reusing recurrent computations for shared substructures. We demonstrate that directly classifying each of the competing spans, and training with global normalization over all possible spans, leads to a significant increase in performance. In our experiments, we show an increase in performance over Wang & Jiang (2016) of 5% in terms of exact match to a reference answer, and 3.6% in terms of predicted answer F1 with respect to the reference. On both of these metrics, we close the gap between Rajpurkar et al.'s baseline and the human-performance upper bound by > 50%.

2.1 TASK DEFINITION

Extractive question answering systems take as input a question q = {q_0, ..., q_n} and a passage of text p = {p_0, ..., p_m} from which they predict a single answer span a = (a_start, a_end), represented as a pair of indices into p. Machine-learned extractive question answering systems, such as the one presented here, learn a predictor function f(q, p) → a from a training dataset of (q, p, a) triples.
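To make the output space concrete, the sketch below enumerates the candidate set A(p) as all start/end index pairs into the passage; the length cap of 30 words mirrors the restriction described later in Section 4. The helper name and the cap default are our illustration, not code from the paper:

```python
# A minimal sketch of the candidate set A(p) used throughout the paper: all
# O(N^2) start/end index pairs into the passage (inclusive indices). The
# length cap of 30 words mirrors the restriction in Section 4.
def candidate_spans(passage_len, max_len=30):
    return [(start, end)
            for start in range(passage_len)
            for end in range(start, min(start + max_len, passage_len))]

spans = candidate_spans(10)
print(len(spans))      # 55 spans for a 10-word passage
print(spans[:3])       # [(0, 0), (0, 1), (0, 2)]
```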
2.2 RELATED WORK

For the SQuAD dataset, the original paper from Rajpurkar et al. (2016) implemented a linear model with sparse features based on n-grams and part-of-speech tags present in the question and the candidate answer. Other than lexical features, they also used syntactic information in the form of dependency paths to extract more general features. They set a strong baseline for subsequent work and also presented an in-depth analysis, showing that lexical and syntactic features contribute most strongly to their model's performance. Subsequent work by Wang & Jiang (2016) uses an end-to-end neural network method that uses a Match-LSTM to model the question and the passage, and uses pointer networks (Vinyals et al., 2015) to extract the answer span from the passage. This model resorts to greedy decoding and falls short in terms of performance compared to our model (see Section 5 for more detail). While we only compare to published baselines, there are other unpublished competitive systems on the SQuAD leaderboard, as listed in footnote 5.

A task that is closely related to extractive question answering is the Cloze task (Taylor, 1953), in which the goal is to predict a concealed span from a declarative sentence given a passage of supporting text. Recently, Hermann et al. (2015) presented a Cloze dataset in which the task is to predict the correct entity in an incomplete sentence given an abstractive summary of a news article. Hermann et al. also present various neural architectures to solve the problem. Although this dataset is large and varied in domain, recent analysis by Chen et al. (2016) shows that simple models can achieve close to the human upper bound. As noted by the authors of the SQuAD paper, the annotated answers in the SQuAD dataset are often spans that include non-entities and can be longer phrases, unlike the Cloze datasets, thus making the task more challenging.

Another, more traditional line of work has focused on extractive question answering on sentences, where the task is to extract a sentence from a document, given a question. Relevant datasets include datasets from the annual TREC evaluations (Voorhees & Tice, 2000) and WikiQA (Yang et al., 2015), where the latter dataset specifically focused on Wikipedia passages. There has been a line of interesting recent publications using neural architectures, focused on this variety of extractive question answering (Tymoshenko et al., 2016; Wang et al., 2016, inter alia). These methods model the question and a candidate answer sentence, but do not focus on possible candidate answer spans that may contain the answer to the given question. In this work, we focus on the more challenging problem of extracting the precise answer span.

3 MODEL

We propose a model architecture called RASoR², illustrated in Figure 1, that explicitly computes embedding representations for candidate answer spans. In most structured prediction problems (e.g. sequence labeling or parsing), the number of possible output structures is exponential in the input length, and computing representations for every candidate is prohibitively expensive. However, we exploit the simplicity of our task, where we can trivially and tractably enumerate all candidates. This facilitates an expressive model that computes joint representations of every answer span, that can be globally normalized during learning.

² An abbreviation for Recurrent Span Representations, pronounced as razor.

In order to compute these span representations, we must aggregate information from the passage and the question for every answer candidate. For the example in Figure 1, RASoR computes an embedding for the candidate answer spans: fixed to, fixed to the, to the, etc. A naive approach for these aggregations would require a network that is cubic in size with respect to the passage length. Instead, our model reduces this to a quadratic size by reusing recurrent computations for shared substructures (i.e. common passage words) from different spans.

Since the choice of answer span depends on the original question, we must incorporate this information into the computation of the span representation. We model this by augmenting the passage word embeddings with additional embedding representations of the question.

In this section, we motivate and describe the architecture for RASoR in a top-down manner.

3.1 SCORING ANSWER SPANS

The goal of our extractive question answering system is to predict the single best answer span among all candidates from the passage p, denoted as A(p). Therefore, we define a probability distribution over all possible answer spans given the question q and passage p, and the predictor function finds the answer span with the maximum likelihood:

f(q, p) := argmax_{a ∈ A(p)} P(a | q, p)    (1)

One might be tempted to introduce independence assumptions that would enable cheaper decoding. For example, this distribution can be modeled as (1) a product of conditionally independent distributions (binary) for every word, or (2) a product of conditionally independent distributions (over words) for the start and end indices of the answer span.
However, we show in Section 5.2 that such independence assumptions hurt the accuracy of the model, and instead we only assume a fixed-length representation h_a of each candidate span that is scored and normalized with a softmax layer (Span score and Softmax in Figure 1):

P(a | q, p) = exp(s_a) / Σ_{a' ∈ A(p)} exp(s_{a'}),    ∀ a ∈ A(p)    (2)

3.2 RASOR: RECURRENT SPAN REPRESENTATION

The previously defined probability distribution depends on the answer span representations, h_a. When computing h_a, we assume access to representations of individual passage words that have been augmented with a representation of the question. We denote these question-focused passage word embeddings as {p*_1, ..., p*_m} and describe their creation in Section 3.3. In order to reuse computation for shared substructures, we use a bidirectional LSTM (Hochreiter & Schmidhuber, 1997) to encode the left and right context of every p*_i (Passage-level BiLSTM in Figure 1). This allows us to simply concatenate the bidirectional LSTM (BiLSTM) outputs at the endpoints of a span to jointly encode its inside and outside information (Span embedding in Figure 1):

{p*'_1, ..., p*'_m} = BILSTM({p*_1, ..., p*_m})    (3)

h_a = [p*'_{a_start}, p*'_{a_end}],    ∀ a = (a_start, a_end) ∈ A(p)    (4)

where BILSTM(·) denotes a BiLSTM over its input embedding sequence and p*'_i is the concatenation of forward and backward outputs at time-step i. While the visualization in Figure 1 shows a single-layer BiLSTM for simplicity, we use a multi-layer BiLSTM in our experiments. The concatenated output of each layer is used as input for the subsequent layer, allowing the upper layers to depend on the entire passage.

The span embeddings are scored by the final layer of the model:

s_a = w_a · FFNN(h_a),    ∀ a ∈ A(p)    (5)

where FFNN(·) denotes a fully connected feed-forward neural network that provides a non-linear mapping of its input embedding, and w_a denotes a learned vector for scoring the last layer of the feed-forward neural network.
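The sketch below walks through Equations (3)-(5) in NumPy under simplifying assumptions: the passage-level BiLSTM is abstracted as a precomputed output matrix, the FFNN is a single ReLU layer, and all weights are random stand-ins rather than trained parameters:

```python
# Sketch of Equations (3)-(5): concatenate the passage-level BiLSTM outputs
# at a span's endpoints, score the result with a feed-forward layer, and
# normalize over all candidates with a softmax. The BiLSTM itself is
# abstracted away as a precomputed matrix `p_star_prime` of shape (m, 2d).
import numpy as np

rng = np.random.default_rng(0)
m, d, hidden = 8, 4, 6                      # passage length, LSTM size, FFNN size
p_star_prime = rng.normal(size=(m, 2 * d))  # BiLSTM outputs p*'_1 ... p*'_m

W1 = rng.normal(size=(4 * d, hidden))       # FFNN hidden layer (random stand-in)
w_a = rng.normal(size=hidden)               # final scoring vector

spans = [(s, e) for s in range(m) for e in range(s, m)]
h = np.stack([np.concatenate([p_star_prime[s], p_star_prime[e]])
              for s, e in spans])           # h_a = [p*'_start ; p*'_end]
scores = np.maximum(h @ W1, 0.0) @ w_a      # s_a = w_a . FFNN(h_a)
probs = np.exp(scores - scores.max())
probs /= probs.sum()                        # P(a | q, p), globally normalized
print(spans[int(np.argmax(probs))])         # f(q, p) = argmax_a P(a | q, p)
```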
3.3 QUESTION-FOCUSED PASSAGE WORD EMBEDDING

Computing the question-focused passage word embeddings {p*_1, ..., p*_m} requires integrating question information into the passage. The architecture for this integration is flexible and likely depends on the nature of the dataset. For the SQuAD dataset, we find that both passage-aligned and passage-independent question representations are effective at incorporating this contextual information, and experiments will show that their benefits are complementary. To incorporate these question representations, we simply concatenate them with the passage word embeddings, i.e. p*_i = [p_i, q_i^align, q^indep] (Question-focused passage word embedding in Figure 1).

We use fixed pretrained embeddings to represent question and passage words. Therefore, in the following discussion, notation for the words is interchangeable with their embedding representations.

Passage-aligned question representation In this dataset, the question-passage pairs often contain large lexical overlap or similarity near the correct answer span. To encourage the model to exploit these similarities, we include a fixed-length representation of the question based on soft alignments with the passage word. The alignments are computed via neural attention (Bahdanau et al., 2014), and we use the variant proposed by Parikh et al. (2016), where attention scores are dot products between non-linear mappings of word embeddings:

s_ij = FFNN(p_i) · FFNN(q_j),    1 ≤ j ≤ n    (6)

a_ij = exp(s_ij) / Σ_{k=1}^{n} exp(s_ik),    1 ≤ j ≤ n    (7)

q_i^align = Σ_{j=1}^{n} a_ij q_j    (8)

Passage-independent question representation We also include a representation of the question that does not depend on the passage and is shared for all passage words.

Similar to the previous question representation, an attention score is computed via a dot-product, except the question word is compared to a universal learned embedding rather than any particular passage word. Additionally, we incorporate contextual information with a BiLSTM before aggregating the outputs using this attention mechanism.

The goal is to generate a coarse-grained summary of the question that depends on word order. Formally, the passage-independent question representation q^indep is computed as follows:

{q'_1, ..., q'_n} = BILSTM(q)    (9)

s_j = w_q · FFNN(q'_j),    1 ≤ j ≤ n    (10)

a_j = exp(s_j) / Σ_{k=1}^{n} exp(s_k),    1 ≤ j ≤ n    (11)

q^indep = Σ_{j=1}^{n} a_j q'_j    (12)

where w_q denotes a learned vector for scoring the last layer of the feed-forward neural network.

Given the above model specification, learning is straightforward. We simply maximize the log-likelihood of the correct answer candidates and backpropagate the errors end-to-end.

Figure 1: A visualization of RASoR, where the question is "What are the stators attached to?" and the passage is "...fixed to the turbine ...". The model constructs question-focused passage word embeddings by concatenating (1) the original passage word embedding, (2) a passage-aligned representation of the question, and (3) a passage-independent representation of the question shared across all passage words. We use a BiLSTM over these concatenated embeddings to efficiently recover embedding representations of all possible spans, which are then scored by the final layer of the model.
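The following is a compact sketch of the data flow in Equations (6)-(12), with the FFNNs and the question-level BiLSTM replaced by fixed random maps so the shapes are easy to follow; everything here is illustrative rather than the trained model:

```python
# NumPy sketch of the two question summaries in Section 3.3; shapes and
# names are ours, and the random matrices stand in for learned parameters.
import numpy as np

rng = np.random.default_rng(1)
m, n, d = 5, 4, 3                       # passage length, question length, emb size
p = rng.normal(size=(m, d))             # passage word embeddings
q = rng.normal(size=(n, d))             # question word embeddings

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Passage-aligned representation, Eqs. (6)-(8): one summary per passage word.
F = rng.normal(size=(d, d))             # stand-in for FFNN(.)
s = (p @ F) @ (q @ F).T                 # s_ij = FFNN(p_i) . FFNN(q_j)
a = softmax(s, axis=1)                  # a_ij over question words
q_align = a @ q                         # q_i^align = sum_j a_ij q_j

# Passage-independent representation, Eqs. (9)-(12): a single shared summary.
q_ctx = np.tanh(q @ rng.normal(size=(d, d)))   # stand-in for BiLSTM(q)
w_q = rng.normal(size=d)
a_indep = softmax(q_ctx @ w_q)                 # attention to a learned vector
q_indep = a_indep @ q_ctx                      # q^indep

# Question-focused passage word embeddings: p*_i = [p_i, q_i^align, q^indep].
p_star = np.concatenate([p, q_align, np.tile(q_indep, (m, 1))], axis=1)
print(p_star.shape)  # (5, 9)
```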
4 EXPERIMENTAL SETUP

We represent each of the words in the question and document using 300-dimensional GloVe embeddings trained on a corpus of 840bn words (Pennington et al., 2014). These embeddings cover 200k words and all out-of-vocabulary (OOV) words are projected onto one of 1m randomly initialized 300d embeddings. We couple the input and forget gates in our LSTMs, as described in Greff et al. (2016), and we use a single dropout mask to apply dropout across all LSTM time-steps, as proposed by Gal & Ghahramani (2016). Hidden layers in the feed-forward neural networks use rectified linear units (Nair & Hinton, 2010). Answer candidates are limited to spans with at most 30 words.

To choose the final model configuration, we ran grid searches over: the dimensionality of the LSTM hidden states (25, 50, 100, 200); the number of stacked LSTM layers (1, 2, 3); the width (50, 100, 150, 200) and depth (1, 2) of the feed-forward neural networks; the dropout rate (0, 0.1, 0.2); and the decay multiplier (0.9, 0.95, 1.0) with which we multiply the learning rate every 10k steps. The best model uses a single 150d hidden layer in all feed-forward neural networks; 50d LSTM states; two-layer BiLSTMs for the span encoder and the passage-independent question representation; dropout of 0.1 throughout; and a learning rate decay of 5% every 10k steps.

All models are implemented using TensorFlow and trained on the SQuAD training set using the ADAM (Kingma & Ba, 2015) optimizer with a mini-batch size of 4 and trained using 10 asynchronous training threads on a single machine.

5 RESULTS

We train on the 80k (question, passage, answer span) triples in the SQuAD training set and report results on the 10k examples in the SQuAD development set. Due to copyright restrictions, we are currently not able to upload our models to CodaLab, which is required to run on the blind SQuAD test set, but we are working with Rajpurkar et al. to remedy this, and this paper will be updated with test numbers as soon as possible.

All results are calculated using the official SQuAD evaluation script, which reports exact answer match and F1 overlap of the unigrams between the predicted answer and the closest labeled answer from the 3 reference answers given in the SQuAD development set.

5.1 COMPARISONS TO OTHER WORK

Our model with recurrent span representations (RASoR) is compared to all previously published systems. Rajpurkar et al. (2016) published a logistic regression baseline as well as human performance on the SQuAD task. The logistic regression baseline uses the output of an existing syntactic parser both as a constraint on the set of allowed answer spans, and as a method of creating sparse features for an answer-centric scoring model. Despite not having access to any external representation of linguistic structure, RASoR achieves an error reduction of more than 50% over this baseline, both in terms of exact match and F1, relative to the human performance upper bound.

                                  Dev            Test
System                            EM      F1     EM      F1
Logistic regression baseline      39.8    51.0   40.4    51.0
Match-LSTM (Sequence)             54.5    67.7   54.8    68.0
Match-LSTM (Boundary)             60.5    70.7   59.4    70.0
RASoR                             66.4    74.9   -       -
RASoR (Ensemble)                  68.2    76.7   -       -
Human                             81.4    91.0   82.3    91.2

Table 1: Exact match (EM) and span F1 on SQuAD. We are currently unable to evaluate on the blind SQuAD test set due to copyright restrictions. We confirm that we did not overfit the development set via 5-fold cross validation of the hyper-parameters, resulting in 66.0 ± 1.0 exact match and 74.5 ± 0.9 F1.

More closely related to RASoR is the boundary model with Match-LSTMs and Pointer Networks by Wang & Jiang (2016). Their model similarly uses recurrent networks to learn embeddings of each passage word in the context of the question, and it can also capture interactions between endpoints, since the end index probability distribution is conditioned on the start index. However, both training and evaluation are greedy, making their system susceptible to search errors when decoding. In contrast, RASoR can efficiently and explicitly model the quadratic number of possible answers, which leads to a 14% error reduction over the best performing Match-LSTM model.

We also ensemble RASoR with a baseline model described in Section 5.2 that independently predicts endpoints rather than spans (Endpoints prediction in Table 2b). By simply computing the product of the output probabilities, this ensemble further increases performance to 68.2% exact match. We examine the causes of this improvement in Section 6.

Since we do not have access to the test set, we also present 5-fold cross validation experiments to demonstrate that our dev-set results are not an outcome of overfitting through hyper-parameter selection.
In this 5-fold setting, we create 5 pseudo dev/test splits from the SQuAD development set.⁶ We choose hyper-parameters on the basis of the pseudo dev set, and report performance on the disjoint pseudo test set. Each of the pseudo dev sets led us to choose the same optimal model hyper-parameters from a grid of 59 settings, as well as very similar training stopping points. We compute the mean and standard deviation of both evaluation metrics for these optimal models on the pseudo test set, resulting in a 66.0 ± 1.0 exact match and 74.5 ± 0.9 F1. These results show that our hyper-parameter selection procedure is not overfitting on the 10k SQuAD development set, and we subsequently expect that our model's performance will translate to the SQuAD test set.

5 As of submission, other unpublished systems are shown on the SQuAD leaderboard, including Match-LSTM with Ans-Ptr (Boundary+Ensemble), Co-attention, r-net, Match-LSTM with Bi-Ans-Ptr (Boundary), Co-attention old, Dynamic Chunk Reader, Dynamic Chunk Ranker with Convolution layer, Attentive Chunker.
6 We split by Wikipedia page ID and use four folds as a development set and the remaining fold as a test set.

5.2 MODEL VARIATIONS

We investigate two main questions in the following ablations and comparisons: (1) How important are the two methods of representing the question described in Section 3.3? (2) What is the impact of learning a loss function that accurately reflects the span prediction task?

Question representations Table 2a shows the performance of RASoR when either of the two question representations described in Section 3.3 is removed. The passage-aligned question representation is crucial, since lexically similar regions of the passage provide strong signal for relevant answer spans. If the question is only integrated through the inclusion of a passage-independent representation, performance drops drastically. The passage-independent question representation over the BiLSTM is less important, but it still accounts for over 3% exact match and F1. The input of both of these components is analyzed qualitatively in Section 6.

Question representation        EM      F1
Only passage-independent       48.7    56.6
Only passage-aligned           63.1    71.3
RASoR                          66.4    74.9
(a) Ablation of question representations.

Learning objective             EM
Membership prediction          57.9
BIO sequence prediction        63.9
Endpoints prediction           65.3
Span prediction w/ log loss    65.2
(b) Comparisons for different learning objectives.

Learning objectives Given a fixed architecture that is capable of encoding the input question-passage pairs, there are many ways of setting up a learning objective to encourage the model to predict the correct span. In Table 2b, we provide comparisons of some alternatives (learned end-to-end) given only the passage-level BiLSTM from RASoR.
In order to provide clean comparisons, we restrict the alternatives to objectives that are trained and evaluated with exact decoding.

The simplest alternative is to consider this task as binary classification for every word (Membership prediction in Table 2b). In this baseline, we optimize the logistic loss for binary labels indicating whether passage words belong to the correct answer span. At prediction time, a valid span can be recovered in linear time by finding the maximum contiguous sum of scores (a sketch of this decoding step is given at the end of this subsection).

Li et al. (2016) proposed a sequence-labeling scheme that is similar to the above baseline (BIO sequence prediction in Table 2b). We follow their proposed model and learn a conditional random field (CRF) layer after the passage-level BiLSTM to model transitions between the different labels. At prediction time, a valid span can be recovered in linear time using Viterbi decoding, with hard transition constraints to enforce a single contiguous output.

We also consider a model that independently predicts the two endpoints of the answer span (Endpoints prediction in Table 2b). This model uses the softmax loss over passage words during learning. When decoding, we only need to enforce the constraint that the start index is no greater than the end index. Without the interactions between the endpoints, this can be computed in linear time. Note that this model has the same expressivity as RASoR if the span-level FFNN were removed.

Lastly, we compare with a model using the same architecture as RASoR but trained with a binary logistic loss rather than a softmax loss over spans (Span prediction w/ logistic loss in Table 2b).

The trend in Table 2b shows that the model is better at leveraging the supervision as the learning objective more accurately reflects the fundamental task at hand: determining the best answer span.

First, we observe general improvements when using labels that closely align with the task. For example, the labels for membership prediction simply happen to provide a single contiguous span in the supervision. The model must consider far more possible answers than it needs to (the power set of all words). The same problem holds for BIO sequence prediction: the model must do additional work to learn the semantics of the BIO tags. On the other hand, in RASoR, the semantics of an answer span is naturally encoded by the set of labels.

Second, we observe the importance of allowing interactions between the endpoints using the span-level FFNN. RASoR outperforms the endpoint prediction model by 1.1 in exact match. The interaction between endpoints enables RASoR to enforce consistency across its two substructures. While this does not provide improvements for predicting the correct region of the answer (captured by the F1 metric, which drops by 0.2), it is more likely to predict a clean answer span that matches human judgment exactly (captured by the exact-match metric).
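For the membership-prediction baseline referenced above, the linear-time decoding step is a maximum contiguous subsequence sum, which can be sketched with Kadane's algorithm; the per-word scores below are made up for illustration:

```python
# Sketch of the linear-time decoding step for the membership-prediction
# baseline: given per-word scores (e.g. logits for belonging to the answer),
# the best single contiguous span is the maximum contiguous sum, recoverable
# with Kadane's algorithm.
def best_contiguous_span(scores):
    best_sum, best_span = float("-inf"), (0, 0)
    run_sum, run_start = 0.0, 0
    for i, s in enumerate(scores):
        if run_sum <= 0.0:          # restart the running span
            run_sum, run_start = s, i
        else:
            run_sum += s
        if run_sum > best_sum:
            best_sum, best_span = run_sum, (run_start, i)
    return best_span, best_sum

scores = [-1.0, 2.5, 1.0, -0.5, 3.0, -4.0, 0.5]
print(best_contiguous_span(scores))  # ((1, 4), 6.0)
```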
6 ANALYSIS

Figure 2 shows how the performances of RASoR and the endpoint predictor introduced in Section 5.2 degrade as the lengths of their predictions increase. The endpoint predictor underpredicts single-word answer spans, while overpredicting answer spans with more than 8 words.

Figure 2: F1 and Exact Match accuracy of RASoR and the endpoint predictor over different prediction lengths, along with the distribution of both models' prediction lengths and the gold answer lengths.

Since the endpoints predictor does not explicitly model the interaction between the start and end of any given answer span, it is susceptible to choosing the span start and end points from separate answer candidates. For example, consider the following endpoints prediction that is most different in length from a correct span classification. Here, the span classifier correctly answers the question 'Where did the Meuse flow before the flood?' with 'North Sea', but the endpoints prediction is:

'south of today's line Merwede-Oude Maas to the North Sea and formed an archipelago-like estuary with Waal and Lek. This system of numerous bays, estuary-like extended rivers, many islands and constant changes of the coastline, is hard to imagine today. From 1421 to 1904, the Meuse and Waal merged further upstream at Gorinchem to form Merwede. For flood protection reasons, the Meuse was separated from the Waal through a lock and diverted into a new outlet called "Bergse Maas", then Amer and then flows into the former bay Hollands Diep.'

In this prediction, we can see that both the start 'south of ...' and the end '...Hollands Diep' have a reasonable answer type. However, the endpoints predictor has failed to model the fact that they cannot reasonably be part of the same answer, a common error case. The endpoints predictor predicts 514 answers with > 25 more words than the gold answer, but the span classifier never does this.

Figure 3: Attention masks from RASoR. Top predictions are 'Egyptians', 'Egyptians against the British', and 'British' in the first example, and 'unjust laws', 'what they deem to be unjust laws', and 'laws' in the second.

Figure 3 shows attention masks for both of RASoR's question representations. The passage-independent question representation pays most attention to the words that could attach to the answer in the passage ('brought', 'against') or describe the answer category ('people'). Meanwhile, the passage-aligned question representation pays attention to similar words. The top predictions for both examples are all valid syntactic constituents, and they all have the correct semantic category. However, RASoR assigns almost as much probability mass to its incorrect third prediction 'British' as it does to the top-scoring correct prediction 'Egyptians'. This showcases a common failure case for RASoR, where it can find an answer of the correct type close to a phrase that overlaps with the question, but it cannot accurately represent the semantic dependency on that phrase.

Table 3: Example questions and their most attended words in the passage-independent question representation (Equation 11). These examples have the greatest attention (normalized by the question length) in the development set. The attention mechanism typically seeks words in the question that indicate the answer type.

A significant architectural difference from other neural models for the SQuAD dataset, such as
Wang & Jiang (2016), is the use of the passage-independent question representation (Equation 12). Table 3 shows examples in the development set where the model paid the most attention to a single word in the question. The attention mechanism tends to seek words in the question that indicate the answer type, e.g. 'language' from the question: 'What language did the Court of Justice accept ...'. This pattern provides insight into the necessity of using both question representations, since the answer type information is orthogonal to passage alignment information.

7 CONCLUSION

We have shown a novel approach for performing extractive question answering on the SQuAD dataset by explicitly representing and scoring answer span candidates. The core of our model relies on a recurrent network that enables shared computation for the shared substructure across span candidates. We explore different methods of encoding the passage and question, showing the benefits of including both passage-independent and passage-aligned question representations. While we show that this encoding method is beneficial for the task, this is orthogonal to the core contribution of efficiently computing span representations. In future work, we plan to explore alternate architectures that provide input to the recurrent span representations.

REFERENCES

Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Danqi Chen, Jason Bolton, and Christopher D. Manning. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of ACL, 2016.

Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Proceedings of NIPS, 2016.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of ICLR, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of ICLR, 2015.

Ankur P. Parikh, Oscar Tackstrom, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In Proceedings of EMNLP, 2016.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In Proceedings of EMNLP, 2014.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, 2016.

Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of EMNLP, 2013.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Proceedings of NIPS, 2015.

Bingning Wang, Kang Liu, and Jun Zhao. Inner attention based recurrent neural networks for answer selection. In Proceedings of ACL, 2016.

Shuohang Wang and Jing Jiang. Machine comprehension using Match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905, 2016.
BJAA4wKxg

A CONVOLUTIONAL ENCODER MODEL FOR NEURAL MACHINE TRANSLATION

Jonas Gehring, Michael Auli, David Grangier, Yann N. Dauphin

ABSTRACT

The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. In this paper we present a faster and simpler architecture based on a succession of convolutional layers. This allows the entire source sentence to be encoded simultaneously, compared to recurrent networks for which computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art, and we outperform several recently published results on the WMT'15 English-German task. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. Our convolutional encoder speeds up CPU decoding by more than two times at the same or higher accuracy as a strong bi-directional LSTM baseline.

1 INTRODUCTION

Neural machine translation (NMT) is an end-to-end approach to machine translation (Sutskever et al., 2014). The most successful approach to date encodes the source sentence with a bi-directional recurrent neural network (RNN) into a variable length representation and then generates the translation left-to-right with another RNN, where both components interface via a soft-attention mechanism (Bahdanau et al., 2015; Luong et al., 2015a; Bradbury & Socher, 2016; Sennrich et al., 2016b). The recurrent networks are typically parameterized as long short term memory networks (LSTM; Hochreiter & Schmidhuber, 1997) or gated recurrent units (GRU; Cho et al., 2014), often with residual or skip connections (Wu et al., 2016; Zhou et al., 2016) to enable stacking of several layers (§2).

There have been several attempts to use convolutional encoder models for neural machine translation in the past, but they were either only applied to rescoring n-best lists of classical systems (Kalchbrenner & Blunsom, 2013) or were not competitive to recurrent alternatives (Cho et al., 2014a). This is despite several attractive properties of convolutional networks. For example, convolutional networks operate over a fixed-size window of the input sequence, which enables the simultaneous computation of all features for a source sentence. This contrasts with RNNs, which maintain a hidden state of the entire past that prevents parallel computation within a sequence.

Furthermore, a succession of convolutional layers provides a shorter path to capture relationships between elements of a sequence compared to recurrent networks.¹ This also eases learning because the resulting tree-structure applies a fixed number of non-linearities compared to a recurrent neural network. Because processing is bottom-up, all words undergo the same number of transformations, whereas for recurrent networks the first word is over-processed and the last word is transformed only once.

¹ For kernel width k and sequence length n we require max. forwards on a succession of stacked convolutional networks compared to n forwards with a recurrent network.

In this paper we show that an architecture based on convolutional layers is very competitive to recurrent encoders. We investigate simple average pooling as well as parameterized convolutions as an alternative for recurrent encoders, and enable very deep convolutional encoders by using residual connections (He et al., 2015; §3).

We experiment on several standard datasets and compare our approach to variants of recurrent encoders such as uni-directional and bi-directional LSTMs. On WMT'16 English-Romanian transla-
On WMT'16 English-Romanian transla\n' For kernel width k and sequence length n we require max. forwards on a succession of stacke. convolutional networks compared to n forwards with a recurrent network.\ntion we achieve accuracy that is very competitive to the current state-of-the-art single system result We perform competitively on WMT'15 English-German, and nearly match the performance of the best WMT'14 English-French system based on a deep LSTM setup when comparing on a commonly used subset of the training data (Zhou et al. 2016; $4, $5)..\nThe general architecture of the models in this work follows the encoder-decoder approach with soft attention first introduced in Bahdanau et al. (2015). A source sentence x = (x1,..., xm) of m words is processed by an encoder which outputs a sequence of states z =.\nThe decoder is an RNN network that computes a new hidden state si+1 based on the previous state si, an embedding gi of the previous target language word yi, as well as a conditional input c; derived from the encoder output z. We use LSTMs (Hochreiter & Schmidhuber, 1997) for all decoder networks whose state s; comprises of a cell vector and a hidden vector hi which is output by the LSTM at each time step. We input c, into the LSTM by concatenating it to gi..\nThe translation model computes a distribution over the V possible target words yi+1 by transforming the LSTM output h, via a linear layer with weights W, and bias bo:.\np(Yi+1|Y1,..., Yi,x) = softmax(W,hi+1 + bo) E R\nexp d= Wahi+bd+ gi aij m1 exp (d zt) C: = ai j=1\nIn preliminary experiments, we did not find the MLP attention of Bahdanau et al. (2015) to perform significantly better in terms of BLEU nor perplexity. However. we found the dot-product attention to be more favorable in terms of training and evaluation speed.\nWe use bi-directional LSTMs to implement recurrent encoders similar to Zhou et al. (2016) whicl achieved some of the best WMT14 English-French results reported to date. First, each word of the. input sequence x is embedded in distributional space resulting in e = (e1, . . . , em). The embedding. are input to two stacks of uni-directional RNNs where the output of each layer is reversed before being fed into the next layer. The first stack takes the original sequence while the second takes the. reversed input sequence; the output of the second stack is reversed so that the final outputs of the. stacks align. Finally, the top-level hidden states of the two stacks are concatenated and fed into a linear layer to yield z. We denote this encoder architecture as BiLSTM.."}, {"section_index": "2", "section_name": "3.1 POOLING ENCODER", "section_text": "A simple baseline for non-recurrent encoders is the pooling model described in Ranzato et al. (2015). which simply averages the embeddings of k consecutive words. Averaging word embeddings does not convey positional information besides that the words in the input are somewhat close to each. other. As a remedy, we add position embeddings to encode the absolute position of each source word within a sentence. Each source embedding e; therefore contains a position embedding lj. as well as the word embedding ws. Position embeddings have also been found helpful in mem-. ory networks for question-answering and language modeling (Sukhbaatar et al., 2015). Similar. to the recurrent encoder ($2), the attention scores a; are computed from the pooled representa.\nThe conditional input c, at time i is computed via a simple dot-product style attention mecha nism (Luong et al., 2015a). 
We use bi-directional LSTMs to implement recurrent encoders similar to Zhou et al. (2016), which achieved some of the best WMT'14 English-French results reported to date. First, each word of the input sequence x is embedded in distributional space, resulting in e = (e_1, ..., e_m). The embeddings are input to two stacks of uni-directional RNNs where the output of each layer is reversed before being fed into the next layer. The first stack takes the original sequence while the second takes the reversed input sequence; the output of the second stack is reversed so that the final outputs of the stacks align. Finally, the top-level hidden states of the two stacks are concatenated and fed into a linear layer to yield z. We denote this encoder architecture as BiLSTM.

Figure 1: Neural machine translation model with single-layer convolutional encoder networks. CNN-a is on the left and CNN-c is at the right. Embedding layers are not shown.

3.1 POOLING ENCODER

A simple baseline for non-recurrent encoders is the pooling model described in Ranzato et al. (2015), which simply averages the embeddings of k consecutive words. Averaging word embeddings does not convey positional information besides that the words in the input are somewhat close to each other. As a remedy, we add position embeddings to encode the absolute position of each source word within a sentence. Each source embedding e_j therefore contains a position embedding l_j as well as the word embedding w_j. Position embeddings have also been found helpful in memory networks for question-answering and language modeling (Sukhbaatar et al., 2015). Similar to the recurrent encoder (§2), the attention scores a_i are computed from the pooled representations z_j; however, the conditional input c_i is a weighted sum of the embeddings, not z_j, i.e.

z_j = (1/k) Σ_{t=-⌊k/2⌋}^{⌊k/2⌋} e_{j+t},    c_i = Σ_{j=1}^{m} a_ij e_j,    where e_j = w_j + l_j

The input sequence is padded prior to pooling such that the encoder output matches the input length, |z| = |x|. We set k to 5 in all experiments, as in Ranzato et al. (2015).

3.2 CONVOLUTIONAL ENCODER

A straightforward extension of pooling is to learn the kernel in a convolutional neural network (CNN). The encoder output z_j contains information about a fixed-size context depending on the kernel width k, but the desired context width may vary. This can be addressed by stacking several layers of convolutions followed by non-linearities: additional layers increase the total context size while non-linearities can modulate the effective size of the context as needed. For instance, stacking 5 convolutions with kernel width k = 3 results in an input field of 11 words, i.e., each output depends on 11 input words, and the non-linearities allow the encoder to exploit the full input field, or to concentrate on fewer words as needed.

To ease learning for deep encoders, we add residual connections from the input of each convolution to the output and then apply the non-linear activation function to the output (tanh; He et al., 2015); the non-linearities are therefore not 'bypassed'. Multi-layer CNNs are constructed by stacking several blocks on top of each other. The CNNs do not contain pooling layers, which are commonly used for down-sampling, i.e., the full source sequence length will be retained after the network has been applied. Similar to the pooling model, the convolutional encoder uses position embeddings.

The final encoder consists of two stacked convolutional networks (Figure 1): CNN-a produces the encoder output z_j to compute the attention scores a_i, while the conditional input c_i to the decoder is computed by summing the outputs of CNN-c:

z_j = CNN-a(e)_j,    c_i = Σ_{j=1}^{m} a_ij CNN-c(e)_j

In practice, we found that two different CNNs resulted in better perplexity as well as BLEU compared to using a single one (§5.3). We also found this to perform better than directly summing the e_j without transformation, as for the pooling model.
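The following sketch mirrors the encoder structure described above: word plus position embeddings, a stack of width-k convolutions with residual connections and tanh, and the CNN-a/CNN-c outputs that feed the attention scores and the conditional input. It is a shape-checking illustration in NumPy with random weights, not the Torch implementation used in the paper:

```python
# Sketch of the convolutional encoder with position embeddings, residual
# connections and tanh. CNN-a and CNN-c are modeled as two independent
# stacks; attention weights are a uniform stand-in.
import numpy as np

rng = np.random.default_rng(3)
m, d, k, layers = 7, 8, 3, 3

def conv_stack(e, n_layers):
    """Stacked same-length convolutions with residual connections and tanh."""
    x = e
    for _ in range(n_layers):
        W = rng.normal(size=(k, d, d)) / np.sqrt(k * d)
        pad = np.pad(x, ((k // 2, k // 2), (0, 0)))      # keep |output| == |input|
        conv = np.stack([sum(pad[j + t] @ W[t] for t in range(k))
                         for j in range(m)])
        x = np.tanh(conv + x)                            # residual, then tanh
    return x

word = rng.normal(size=(m, d))
pos = rng.normal(size=(m, d))
e = word + pos                                           # e_j = w_j + l_j

z = conv_stack(e, layers)          # CNN-a: encoder outputs for attention scores
c_inputs = conv_stack(e, 1)        # CNN-c: a separate, shallower stack
a_i = np.full(m, 1.0 / m)          # stand-in attention weights a_ij
c_i = a_i @ c_inputs               # c_i = sum_j a_ij CNN-c(e)_j
print(z.shape, c_i.shape)          # (7, 8) (8,)
```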
3.3 RELATED WORK

There are several past attempts to use convolutional encoders for neural machine translation; however, to our knowledge none of them were able to match the performance of recurrent encoders. Kalchbrenner & Blunsom (2013) introduce a convolutional sentence encoder in which a multi-layer CNN generates a fixed-sized embedding for a source sentence, or an n-gram representation followed by transposed convolutions for directly generating a per-token decoder input. The latter requires the length of the translation prior to generation, and both models were evaluated by rescoring the output of an existing translation system. Cho et al. (2014a) propose a gated recursive CNN which is repeatedly applied until a fixed-size representation is obtained, but the recurrent encoder achieves higher accuracy. In follow-up work, the authors improved the model via a soft-attention mechanism but did not re-consider convolutional encoder models (Bahdanau et al., 2015).

Concurrently to our work, Kalchbrenner et al. (2016) have introduced convolutional translation models without an explicit attention mechanism, but their approach does not yet result in state-of-the-art accuracy. Lamb & Xie (2016) also proposed a multi-layer CNN to generate a fixed-size encoder representation, but their work lacks quantitative evaluation in terms of BLEU. Meng et al. (2015) and Tu et al. (2015) applied convolutional models to score phrase-pairs of traditional phrase-based and dependency-based translation models. Convolutional architectures have also been successful in language modeling but so far failed to outperform LSTMs (Pham et al., 2016).

4.1 DATASETS

We evaluate different encoders and ablate architectural choices on a small dataset from the German-English machine translation track of IWSLT 2014 (Cettolo et al., 2014) with a similar setting to Ranzato et al. (2015). Unless otherwise stated, we restrict training sentences to have no more than 175 words; test sentences are not filtered. This is a higher threshold compared to other publications but ensures proper training of the position embeddings for non-recurrent encoders; the length threshold did not significantly affect recurrent encoders. Length filtering results in 167K sentence pairs and we test on the concatenation of tst2010, tst2011, tst2012, tst2013 and dev2010, comprising 6,948 sentence pairs. Our final results are on three major WMT tasks:

WMT'16 English-Romanian. We use the same data and pre-processing as Sennrich et al. (2016b) and train on 2.8M sentence pairs.³ Our model is word-based instead of relying on byte-pair encoding (Sennrich et al., 2016a). We evaluate on newstest2016.

³ We followed the pre-processing of https://github.com/rsennrich/wmt16-scripts/blob/master/sample/preprocess.sh and added the back-translated data from http://data.statmt.org/rsennrich/wmt16_backtranslations/en-ro.

WMT'15 English-German. We use all available parallel training data, namely Europarl v7, Common Crawl and News Commentary v10, and apply the standard Moses tokenization to obtain 3.9M sentence pairs (Koehn et al., 2007). We report results on newstest2015.

WMT'14 English-French. We use a commonly used subset of 12M sentence pairs (Schwenk, 2014), and remove sentences longer than 150 words. This results in 10.7M sentence pairs for training. Results are reported on ntst14. Different to the other datasets, we lowercase the training data and evaluate with case-insensitive BLEU.

A small subset of the training data serves as validation set (5% for IWSLT'14 and 1% for WMT) for early stopping and learning rate annealing (§4.3). For IWSLT'14, we replace words that occur fewer than 3 times with a <unk> symbol, which results in a vocabulary of 24,158 English and 35,882 German word types. For WMT datasets, we retain 200K source and 80K target words. For English-French only, we set the target vocabulary to 30K types to be comparable with previous work.

4.2 MODEL PARAMETERS

We use 512 hidden units for both recurrent encoders and decoders. We reset the decoder hidden states to zero between sentences. For the convolutional encoder, 512 hidden units are used for each layer in CNN-a, while layers in CNN-c contain 256 units each. All embeddings, including the output produced by the decoder before the final linear layer, are of 256 dimensions. On the WMT corpora, we find that we can improve the performance of the bi-directional LSTM models (BiLSTM) by using 512-dimensional word embeddings.
For convolutiona layers, we use a uniform distribution of [-kd-0.5, kd-0.5], where k is the kernel width (we use 3.\nWefollowedthepre-processing Of https://github.com/rsennrich/wmt16-scripts/ blob/master/sample/preprocess.sh and added the back-translated data from http://data. statmt.org/rsennrich/wmt16 backtranslations/en-ro.\nthe length of the translation prior to generation and both models were evaluated by rescoring the. output of an existing translation system. Cho et al. (2014a) propose a gated recursive CNN which is repeatedly applied until a fixed-size representation is obtained but the recurrent encoder achieves higher accuracy. In follow-up work, the authors improved the model via a soft-attention mechanism. but did not re-consider convolutional encoder models (Bahdanau et al., 2015)..\nDifferent to the other datasets, we lowercase the training data and evaluation with case-insensitive BLEI\nthroughout this work) and d is the input size for the first layer and the number of hidden units for subsequent layers (Collobert et al., 2011b). For CNN-c, we transform the input and output with a linear layer each to match the smaller embedding size. The model parameters were tuned on IWSLT'14 and cross-validated on the larger WMT corpora."}, {"section_index": "6", "section_name": "4.3 OPTIMIZATION", "section_text": "Recurrent models are trained with Adam as we found them to benefit from aggressive optimization. We use a step width of 3.125. 10-4 and early stopping based on validation perplexity (Kingma & Ba. 2014). For non-recurrent encoders, we obtain best results with stochastic gradient descent (SGD and annealing: we use a learning rate of O.1 and once the validation perplexity stops improving, we reduce the learning rate by an order of magnitude each epoch until it falls below 10-4\nFor all models, we use mini-batches of 32 sentences for IWsLT'14 and 64 for WMT. We use. truncated back-propagation through time to limit the length of target sequences per mini-batch to 25 words. Gradients are normalized by the mini-batch size. We re-normalize the gradients if their norm exceeds 25 (Pascanu et al., 2013). Gradients of convolutional layers are scaled by sqrt(dim(input))-1 similar to Collobert et al. (2011b). We use dropout on the embeddings and decoder outputs h; with a rate of 0.2 for IWSLT'14 and 0.1 for WMT (Srivastava et al., 2014). All. models are implemented in Torch (Collobert et al., 2011a) and trained on a single GPU.."}, {"section_index": "7", "section_name": "4.4 EVALUATION", "section_text": "We report accuracy of single systems by training several identical models with different random. seeds (5 for IWSLT'14, 3 for WMT) and pick the one with the best validation perplexity for final BLEU evaluation. Translations are generated by a beam search and we normalize log-likelihood scores by sentence length. On IWSLT'14 we use a beam width of 10 and for WMT models we. tune beam width and word penalty on a separate test set, that is newsdev2016 for WMT'16 English-. Romanian, newstest2014 for WMT'15 English-German and ntst1213 for WMT'14 English-French.4. The word penalty adds a constant factor to log-likelihoods, except for the end-of-sentence token.\nPrior to scoring the generated translations against the respective references, we perform unknowr word replacement based on attention scores (Jean et al., 2015). Unknown words are replaced by looking up the source word with the maximum attention score in a pre-computed dictionary. 
"}, {"section_index": "8", "section_name": "5 RESULTS", "section_text": "

We first compare recurrent and non-recurrent encoders in terms of perplexity and BLEU on IWSLT'14 with and without position embeddings (§3.1) and include a phrase-based system (Koehn et al., 2007). Table 1 shows that a single-layer convolutional model with position embeddings (Convolutional) can outperform both a uni-directional LSTM encoder (LSTM) as well as a bi-directional LSTM encoder (BiLSTM). Next, we increase the depth of the convolutional encoder. We choose a good setting by independently varying the number of layers in CNN-a and CNN-c between 1 and 10 and obtained best validation set perplexity with six layers for CNN-a and three layers for CNN-c.

Table 1: Accuracy of encoders with position features (words + pos) and without (words) in terms of BLEU and perplexity (PPL) on IWSLT'14 German to English translation; results include unknown word replacement. Deep Convolutional 6/3 is the only multi-layer configuration; more layers for the LSTMs did not improve accuracy on this dataset.

System/Encoder            BLEU (words + pos)   BLEU (words)   PPL (words + pos)
Phrase-based              28.4                 -              -
LSTM                      27.4                 27.3           10.8
BiLSTM                    29.7                 29.8           9.9
Pooling                   26.1                 19.7           11.0
Convolutional             29.9                 20.1           9.1
Deep Convolutional 6/3    30.4                 25.2           8.9

This configuration outperforms BiLSTM by 0.7 BLEU (Deep Convolutional 6/3). We investigate depth in the convolutional encoder more in §5.3.

Among recurrent encoders, the BiLSTM is 2.3 BLEU better than the uni-directional version. The simple pooling encoder, which does not contain any parameters, is only 1.3 BLEU lower than a uni-directional LSTM encoder and 3.6 BLEU lower than BiLSTM. The results without position embeddings (words) show that position information is crucial for convolutional encoders, in particular for shallow models (Pooling and Convolutional), whereas deeper models are less affected. Recurrent encoders do not benefit from explicit position information because this information can be naturally extracted through the sequential computation.

When tuning model settings, we generally observe good correlation between perplexity and BLEU. However, for convolutional encoders perplexity gains translate to smaller BLEU improvements compared to recurrent counterparts (Table 1). We observe a similar trend on larger datasets.

"}, {"section_index": "9", "section_name": "5.2 EVALUATION ON WMT CORPORA", "section_text": "

Next, we evaluate the BiLSTM encoder and the convolutional encoder architecture on three larger tasks and compare against previously published results. On WMT'16 English-Romanian translation we compare to Sennrich et al. (2016b), the winning single system entry for this language pair. Their model consists of a bi-directional GRU encoder, a GRU decoder and MLP-based attention. They use byte pair encoding (BPE) to achieve open-vocabulary translation and dropout in all components of the neural network to achieve 28.1 BLEU; we use the same pre-processing but no BPE (§4).

The results (Table 2) show that a deep convolutional encoder performs competitively to the state of the art on this dataset (Sennrich et al., 2016b). Our bi-directional LSTM encoder baseline is 0.7 BLEU lower than Sennrich et al. (2016b) but uses only 512 hidden units compared to 1024. A single-layer convolutional encoder with embedding size 256 performs very competitively at 27.1 BLEU, which is only 0.3 BLEU below the BiLSTM baseline. Increasing the number of convolutional layers to 8 in CNN-a and 4 in CNN-c achieves 27.8 BLEU, well above this baseline.

On WMT'15 English to German, we compare to a BiLSTM baseline and prior work: Jean et al. (2015) introduce a large output vocabulary; the decoder of Chung et al. (2016) operates on the character level; Yang et al. (2016) use LSTMs instead of GRUs and feed the conditional input to the output layer as well as to the decoder.

Our single-layer BiLSTM baseline performs competitively compared to prior work, and a two-layer BiLSTM performs about 0.4 BLEU better at 23.6 BLEU. Previous work also used multi-layer setups, e.g., Chung et al. (2016) have two layers both in the encoder and the decoder with 1024 hidden units, and Yang et al. (2016) use 1000 hidden units per LSTM. We use 512 hidden units for both LSTM and convolutional encoders.
A single-layer CNN encoder (Convolutional) achieves 22.0 BLEU, which is significantly lower than the two-layer BiLSTM. However, adding additional layers (Deep Convolutional 8/4) achieves the same accuracy as the two-layer BiLSTM, and a 15-layer CNN-a outperforms it by 0.7 BLEU (Deep Convolutional 15/5). The latter performs competitively to the best published results, which use decoder improvements that may benefit our setup as well.

Table 2: Accuracy on three WMT tasks, including results published in previous work. For deep convolutional encoders, we include the number of layers in CNN-a and CNN-c, respectively.

WMT'16 English-Romanian                        Encoder                   Vocabulary   BLEU
Sennrich et al. (2016b)                        BiGRU                     BPE 90K      28.1
Single-layer decoder                           BiLSTM                    80K          27.4
                                               Convolutional             80K          27.1
                                               Deep Convolutional 8/4    80K          27.8

WMT'15 English-German                          Encoder                   Vocabulary   BLEU
Jean et al. (2015) RNNsearch-LV                BiGRU                     500K         22.4
Chung et al. (2016) BPE-Char                   BiGRU                     Char 500     23.9
Yang et al. (2016) RNNSearch + UNK replace     BiLSTM                    50K          24.3
  + recurrent attention                        BiLSTM                    50K          25.0
Single-layer decoder                           BiLSTM                    80K          23.2
                                               2-layer BiLSTM            80K          23.6
                                               Convolutional             80K          22.0
                                               Deep Convolutional 8/4    80K          23.6
                                               Deep Convolutional 15/5   80K          24.3

WMT'14 English-French (12M)                    Encoder                   Vocabulary   BLEU
Bahdanau et al. (2015) RNNsearch               BiGRU                     30K          28.5
Luong et al. (2015b) Single LSTM               6-layer LSTM              40K          32.7
Jean et al. (2014) RNNsearch-LV                BiGRU                     500K         34.6
Zhou et al. (2016) Deep-Att                    Deep BiLSTM               30K          35.9
Single-layer decoder                           BiLSTM                    30K          34.6
                                               Deep Convolutional 8/4    30K          34.6
Two-layer decoder                              2-layer BiLSTM            30K          35.3
                                               Deep Convolutional 20/5   30K          35.7

Finally, we evaluate on the larger WMT'14 English-French corpus. On this dataset the recurrent architectures benefit from an additional layer both in the encoder and the decoder. For a single-layer decoder, a deep convolutional encoder matches the BiLSTM accuracy, and for a two-layer decoder, our very deep convolutional encoder with up to 20 layers outperforms the BiLSTM by 0.4 BLEU.
It has 40% fewer parameters than the BiLSTM due to the smaller embedding sizes. We also outperform several previous systems, including the very deep encoder-decoder model proposed by Luong et al. (2015a). Our best result is just 0.2 BLEU below Zhou et al. (2016), who use a very deep LSTM setup with a 9-layer encoder, a 7-layer decoder, shortcut connections and extensive regularization with dropout and L2 regularization.

We next motivate our design of the convolutional encoder (§3.2). We use the smaller IWSLT'14 German-English setup without unknown word replacement to enable fast experimental turn-around. BLEU results are averaged over three training runs initialized with different seeds.

Figure 2 shows accuracy for different numbers of layers of both CNNs, with and without residual connections. Our first observation is that computing the conditional input c_i directly over embeddings e (line "without CNN-c") is already working well, at 28.3 BLEU with a single CNN-a layer and at 29.1 BLEU for CNN-a with 7 layers (Figure 2a). Increasing the number of CNN-c layers is beneficial up to three layers, and beyond this we did not observe further improvements. Similarly, increasing the number of layers in CNN-a beyond six does not increase accuracy on this relatively small dataset. In general, choosing two to three times as many layers in CNN-a as in CNN-c is a good rule of thumb. Without residual connections, the model fails to utilize the increase in modeling power from additional layers, and performance drops significantly for deeper encoders (Figure 2b).

[Figure 2: two plots of BLEU (28-30) against the number of layers in CNN-a (1-10), with one curve per CNN-c setting (without CNN-c, and 1- to 4-layer CNN-c); panel (a) with residual connections, panel (b) without residual connections.]

Figure 2: Effect of encoder depth on IWSLT'14 with and without residual connections. The x-axis varies the number of layers in CNN-a and curves show different CNN-c settings.

Our convolutional architecture relies on two sets of networks, CNN-a for attention score computation a_i and CNN-c for the conditional input c_i to be fed to the decoder. We found that using the same network for both tasks, similar to recurrent encoders, resulted in poor accuracy of 22.9 BLEU. This compares to 28.5 BLEU for separate single-layer networks, or 28.3 BLEU when aggregating embeddings for c_i. Increasing the number of layers in the single-network setup did not help. Figure 2(a) suggests that the attention weights (CNN-a) need to integrate information from a wide context, which can be done with a deep stack. At the same time, the vectors which are averaged (CNN-c) seem to benefit from a shallower, more local representation closer to the input words. Two stacks are an easy way to achieve these conflicting requirements.
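To make the two-stack design concrete, a minimal NumPy sketch of one attention step; the dot-product scoring and all variable names are our assumptions for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def conditional_input(h_dec, z_a, z_c):
    """Attention with separate encoder stacks (sketch, names assumed).

    h_dec: decoder state, shape (d,)
    z_a:   outputs of CNN-a, shape (m, d), used only for attention scores
    z_c:   outputs of CNN-c, shape (m, d), averaged to form the conditional input
    """
    a = softmax(z_a @ h_dec)   # attention weights a_ij over m source positions
    c = a @ z_c                # c_i = sum_j a_ij * CNN-c(e)_j
    return c, a
```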
In Appendix A we visualize attention scores and find that alignments for CNN encoders are less sharp compared to BiLSTMs; however, this does not affect the effectiveness of unknown word replacement once we adjust for shifted maxima. In Appendix B we investigate whether deep convolutional encoders are required for translating long sentences and observe that even relatively shallow encoders perform well on long sentences.

For training, we use the fast CuDNN LSTM implementation for layers without attention and experiment on IWSLT'14 with batch size 32. The single-layer BiLSTM model trains at 4300 target words/second, whereas the 6/3 deep convolutional encoder compares at 5500 words/second on an NVidia Tesla M40 GPU. We do not observe shorter overall training time, since SGD converges more slowly than Adam, which we use for BiLSTM models.

We measure generation speed on an Intel Haswell CPU clocked at 2.50GHz with a single thread for BLAS operations. We use vocabulary selection, which can speed up generation by up to a factor of ten at no cost in accuracy by making the time to compute the final output layer negligible (Mi et al., 2016; L'Hostis et al., 2016). This shifts the focus from the efficiency of the encoder to the efficiency of the decoder. On IWSLT'14 (Table 3a) the convolutional encoder increases the speed of the overall model by a factor of 1.35 compared to the BiLSTM encoder while improving accuracy by 0.7 BLEU. In this setup both encoder models have the same hidden layer and embedding sizes.

Table 3: Generation speed in source words per second on a single CPU core.

(a) IWSLT'14 German-English generation speed on tst2013 with beam size 10.
Encoder          Words/s   BLEU
BiLSTM           139.7     22.4
Deep Conv. 6/3   187.9     23.1

(b) WMT'15 English-German generation speed on newstest2015 with beam size 5.
Encoder           Words/s   BLEU
2-layer BiLSTM    109.9     23.6
Deep Conv. 8/4    231.1     23.7
Deep Conv. 15/5   203.3     24.0

On the larger WMT'15 English-German task (Table 3b) the convolutional encoder speeds up generation by 2.1 times compared to a two-layer BiLSTM. This corresponds to 231 source words/second with beam size 5. Our best model on this dataset generates 203 words/second, but at slightly lower accuracy compared to the full vocabulary setting in Table 2. The recurrent encoder uses larger embeddings than the convolutional encoder, which were required for the models to match in accuracy.

The smaller embedding size is not the only reason for the speed-up. In Table 3a, we compare a Conv 6/3 encoder and a BiLSTM with equal embedding sizes. The convolutional encoder is still 1.34x faster (at 0.7 higher BLEU) although it requires roughly 1.6x as many FLOPs. We believe that this is likely due to better cache locality for convolutional layers on CPUs: an LSTM with fused gates requires two big matrix multiplications with different weights as well as additions, multiplications and non-linearities for each source word, while the output of each convolutional layer can be computed as a whole with a single matrix multiply. (Our bi-directional LSTM implementation is based on torch rnnlib, which uses fused LSTM gates, https://github.com/facebookresearch/torch-rnnlib/blob/master/rnnlib/cell.lua, and which we consider an efficient implementation.)

For comparison, the quantized deep LSTM-based model in Wu et al. (2016) processes 106.4 words/second for English-French on a CPU with 88 cores and 358.8 words/second on a custom TPU chip. The optimized RNNsearch model and C++ decoder described by Junczys-Dowmunt et al. (2016) translate 265.3 words/s on a CPU with a similar vocabulary selection technique, computing 16 sentences in parallel, i.e., 16.6 words/s on a single core.

We introduced a simple encoder model for neural machine translation based on convolutional networks. This approach is more parallelizable than recurrent networks and provides a shorter path to capture long-range dependencies in the source. We find it essential to use source position embeddings as well as different CNNs for attention score computation and conditional input aggregation.

Our experiments show that convolutional encoders perform on par or better than baselines based on bi-directional LSTM encoders.
In comparison to other recent work, our deep convolutional encoder is very competitive with the best published results to date (WMT'16 English-Romanian), which are obtained with significantly more complex models (WMT'14 English-French) or stem from improvements that are orthogonal to our work (WMT'15 English-German). Our architecture also leads to large generation speed improvements: translation models with our convolutional encoder can translate twice as fast as strong baselines with bi-directional recurrent encoders.

Future work includes better training to enable faster convergence with the convolutional encoder to better leverage the higher processing speed. Our fast architecture is interesting for character-level encoders, where the input is significantly longer than for words. Also, we plan to investigate the effectiveness of our architecture on other sequence-to-sequence tasks, e.g. summarization, constituency parsing, dialog modeling.

"}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "

We would like to thank Sumit Chopra and Marc'Aurelio Ranzato for helpful discussions related to this work.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR, 2015.

Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. In Proc. of SSST, 2014a.

Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. A Character-level Decoder without Explicit Segmentation for Neural Machine Translation. arXiv preprint arXiv:1603.06147, 2016.

Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A Matlab-like Environment for Machine Learning. In BigLearn, NIPS Workshop, 2011a. URL http://torch.ch.

Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural Language Processing (almost) from Scratch. JMLR, 12(Aug):2493-2537, 2011b.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proc. of CVPR, 2015.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Sebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On Using Very Large Target Vocabulary for Neural Machine Translation. arXiv preprint arXiv:1412.2007v2, 2014.

Sebastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. Montreal Neural Machine Translation Systems for WMT15. In Proc. of WMT, pp. 134-140, 2015.

Andrew Lamb and Michael Xie. Convolutional Encoders for Neural Machine Translation. https://cs224d.stanford.edu/reports/LambAndrew.pdf, 2016. Accessed: 2010-10-3.

Gurvan L'Hostis, David Grangier, and Michael Auli. Vocabulary Selection Strategies for Neural Machine Translation. arXiv preprint arXiv:1610.00072, 2016.

Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the Rare Word Problem in Neural Machine Translation. In Proc. of ACL, 2015b.

Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. Neural Machine Translation in Linear Time. arXiv, 2016.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open Source Toolkit for Statistical Machine Translation. In Proc. of ACL, 2007.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the Difficulty of Training Recurrent Neural Networks. ICML (3), 28:1310-1318, 2013.

Ngoc-Quan Pham, German Kruszewski, and Gemma Boleda. Convolutional Neural Network Language Models. In Proc. of EMNLP, 2016.

Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence Level Training with Recurrent Neural Networks. In Proc. of ICLR, 2015.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural Machine Translation of Rare Words with Subword Units. In Proc. of ACL, 2016a.

Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 15:1929-1958, 2014.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, and Arthur Szlam. End-to-end Memory Networks. In Proc. of NIPS, pp. 2440-2448, 2015.

Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to Sequence Learning with Neural Networks. In Proc. of NIPS, pp. 3104-3112, 2014.

Zhaopeng Tu, Baotian Hu, Zhengdong Lu, and Hang Li. Context-dependent Translation Selection Using Convolutional Neural Network. In Proc. of ACL-IJCNLP, 2015.

Zichao Yang, Zhiting Hu, Yuntian Deng, Chris Dyer, and Alex Smola. Neural Machine Translation with Recurrent Attention Modeling. arXiv preprint arXiv:1607.05108, 2016.

Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation. arXiv preprint arXiv:1606.04199, 2016.

Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. Vocabulary Manipulation for Neural Machine Translation. arXiv preprint arXiv:1605.03209, 2016.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144, 2016.

"}, {"section_index": "12", "section_name": "ALIGNMENT VISUALIZATION", "section_text": "

In Figure 4 and Figure 5, we plot attention scores for a sample WMT'15 English-German and WMT'14 English-French translation with BiLSTM and deep convolutional encoders. The translation is on the x-axis and the source sentence on the y-axis.

The attention scores of the BiLSTM output are sharp but do not necessarily represent a correct alignment. For CNN encoders the scores are less focused but still indicate an approximate source location, e.g., in Figure 4b, when moving the clause "over 1,000 people were taken hostage" to the back of the translation. For some models, attention maxima are consistently shifted by one token, as in both Figure 4b and Figure 5b.

Interestingly, convolutional encoders tend to focus on the last token (Figure 4b) or both the first and last tokens (Figure 5b). Motivated by the hypothesis that this may be due to the decoder depending on the length of the source sentence (which it cannot determine without position embeddings), we explicitly provided a distributed representation of the input length to the decoder and attention module.
However, this did not cause a change in attention patterns, nor did it improve translation accuracy.

[Figure 3: BLEU of the 2-layer BiLSTM, Deep Conv. 6/3, Deep Conv. 8/4 and Deep Conv. 15/5 encoders plotted against source sentence length buckets from 1-7 up to 43-85 words.]

Figure 3: BLEU per sentence length on WMT'15 English-German newstest2015. The test set is partitioned into 15 equally-sized buckets according to source sentence length.

One characteristic of our convolutional encoder architecture is that the context over which outputs are computed depends on the number of layers. With bi-directional RNNs, every encoder output depends on the entire source sentence. In Figure 3, we evaluate whether limited context affects the translation quality on longer sentences of WMT'15 English-German, which often requires moving verbs over long distances. We sort the newstest2015 test set by source length, partition it into 15 equally-sized buckets, and compare the BLEU scores of models listed in Table 2 on a per-bucket basis.

There is no clear evidence for sub-par translations on sentences that are longer than the observable context per encoder output. We include a small encoder with a 6-layer CNN-c and a 3-layer CNN-a in the comparison, which performs worse than a 2-layer BiLSTM (23.3 BLEU vs. 23.6). With 6 convolutional layers at kernel width 3, each encoder output contains information of 13 adjacent source words. Looking at the accuracy for sentences with 15 words or more, this relatively shallow CNN is either on par with or better than the BiLSTM for 5 out of 10 buckets; the BiLSTM has access to the entire source context. Similar observations can be made for the deeper convolutional encoders.

[Figure 4: attention score heat maps for the source sentence "Ten years ago over 1,000 people were taken hostage by Chechen militants at a school in Beslan, southern Russia </s>".]

(a) 2-layer BiLSTM encoder.
(b) Deep convolutional encoder with 15-layer CNN-a and 5-layer CNN-c.

Figure 4: Attention scores for WMT'15 English-German translation for a sentence of newstest2015.

[Figure 5: attention score heat maps for the source sentence "Phuket police interviewed Bamford for two days before she confessed to fabricating the story . </s>".]

(a) 2-layer BiLSTM encoder.

Figure 5: Attention scores for WMT'14 English-French translation for a sentence of ntst14.

(b) Deep convolutional encoder with 20-layer CNN-a and 5-layer CNN-c"}]
HkLXCE9lx | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "In recent years, deep reinforcement learning has achieved many impressive results, including play. ing Atari games from raw pixels (Guo et al.]2014] |Mnih et al.]2015} Schulman et al.2015), anc acquiring advanced manipulation and locomotion skills (Levine et al.|2 2016Lillicrap et al.2015 Watter et al.]2015] Heess et al.[2015bf Schulman et al.2015] 2016). However, many of the suc cesses come at the expense of high sample complexity. For example, the state-of-the-art Atari result. require tens of thousands of episodes of experience (Mnih et al.||2015) per game. To master a game. one would need to spend nearly 40 days playing it with no rest. In contrast, humans and animals are capable of learning a new task in a very small number of trials. Continuing the previous example. the human player in[Mnih et al.[(2015) only needed 2 hours of experience before mastering a game. We argue that the reason for this sharp contrast is largely due to the lack of a good prior, whicl. results in these deep RL agents needing to rebuild their knowledge about the world from scratch.\nAlthough Bayesian reinforcement learning provides a solid framework for incorporating prior knowledge into the learning process (Strens2. 2000fGhavamzadeh et al.|[2015} Kolter & Ng2009) exact computation of the Bayesian update is intractable in all but the simplest cases. Thus, practi. cal reinforcement learning algorithms often incorporate a mixture of Bayesian and domain-specific ideas to bring down sample complexity and computational burden. Notable examples include guided. policy search with unknown dynamics (Levine & Abbeel,2014) and PILCO (Deisenroth & Ras-. mussen]2011). These methods can learn a task using a few minutes to a few hours of real experience,. compared to days or even weeks required by previous methods (Schulman et al.][2015]2016} Lilli- crap et al.] 2015). However, these methods tend to make assumptions about the environment (e.g.,. instrumentation for access to the state at learning time), or become computationally intractable in. high-dimensional settings (Wahlstrom et al.2015)"}, {"section_index": "1", "section_name": "RL2: FAST REINFORCEMENT LEARNING VIA SLOW REINFORCEMENT LEARNING", "section_text": "Yan Duan', John Schulman'*, Xi Chen', Peter L. Bartlett', Ilya Sutskever', Pieter Abbeelt? t UC Berkeley, Department of Electrical Engineering and Computer Science # OpenAI rla"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Rather than hand-designing domain-specific reinforcement learning algorithms, we take a different. approach in this paper: we view the learning process of the agent itself as an objective, which can. be optimized using standard reinforcement learning algorithms. The objective is averaged across. all possible MDPs according to a specific distribution, which reflects the prior that we would like. to distill into the agent. We structure the agent as a recurrent neural network, which receives past rewards, actions, and termination flags as inputs in addition to the normally received observations. Furthermore, its internal state is preserved across episodes, so that it has the capacity to perform. learning in its own hidden activations. The learned agent thus also acts as the learning algorithm. 
and can adapt to the task at hand when deployed.

We evaluate this approach on two sets of classical problems, multi-armed bandits and tabular MDPs. These problems have been extensively studied, and there exist algorithms that achieve asymptotically optimal performance. We demonstrate that our method, named RL2, can achieve performance comparable with these theoretically justified algorithms. Next, we evaluate RL2 on a vision-based navigation task implemented using the ViZDoom environment (Kempka et al., 2016), showing that RL2 can also scale to high-dimensional problems.

"}, {"section_index": "3", "section_name": "2.2 FORMULATION", "section_text": "

[Figure 1: agent-environment interaction. Within a trial the policy's hidden states h_0, h_1, ... are carried across episodes (Episodes 1 and 2 of Trial 1 on MDP 1) and reset between trials (Episode 1 of Trial 2 on MDP 2); at each step the policy receives s_t, a_t, r_t, d_t.]

We define a discrete-time finite-horizon discounted Markov decision process (MDP) by a tuple $M = (\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \rho_0, \gamma, T)$, in which $\mathcal{S}$ is a state set, $\mathcal{A}$ an action set, $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}_+$ a transition probability distribution, $r : \mathcal{S} \times \mathcal{A} \to [-R_{\max}, R_{\max}]$ a bounded reward function, $\rho_0 : \mathcal{S} \to \mathbb{R}_+$ an initial state distribution, $\gamma \in [0, 1]$ a discount factor, and $T$ the horizon. In policy search methods, we typically optimize a stochastic policy $\pi_\theta : \mathcal{S} \times \mathcal{A} \to \mathbb{R}_+$ parametrized by $\theta$. The objective is to maximize its expected discounted return, $\eta(\pi_\theta) = \mathbb{E}_\tau[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)]$, where $\tau = (s_0, a_0, \ldots)$ denotes the whole trajectory, $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi_\theta(a_t|s_t)$, and $s_{t+1} \sim \mathcal{P}(s_{t+1}|s_t, a_t)$.

We assume knowledge of a set of MDPs, denoted by $\mathcal{M}$, and a distribution over them: $\rho_{\mathcal{M}} : \mathcal{M} \to \mathbb{R}_+$. We only need to sample from this distribution. We use $n$ to denote the total number of episodes allowed to be spent with a specific MDP. We define a trial to be such a series of episodes of interaction with a fixed MDP.

This process of interaction between an agent and the environment is illustrated in Figure 1. Here each trial happens to consist of two episodes, hence $n = 2$. For each trial, a separate MDP is drawn from $\rho_{\mathcal{M}}$, and for each episode, a fresh $s_0$ is drawn from the initial state distribution specific to the corresponding MDP. Upon receiving an action $a_t$ produced by the agent, the environment computes reward $r_t$, steps forward, and computes the next state $s_{t+1}$. If the episode has terminated, it sets termination flag $d_t$ to 1, which otherwise defaults to 0. Together, the next state $s_{t+1}$, action $a_t$, reward $r_t$, and termination flag $d_t$ are concatenated to form the input to the policy, which, conditioned on the hidden state $h_{t+1}$, generates the next hidden state $h_{t+2}$ and action $a_{t+1}$. At the end of an episode, the hidden state of the policy is preserved to the next episode, but not preserved between trials.

(To make sure that the inputs have a consistent dimension, we use placeholder values for the initial input to the policy.)
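A minimal sketch of this interaction protocol; the env/policy interfaces and names are our assumptions for illustration, not part of the paper:

```python
def run_trial(sample_mdp, policy, n_episodes):
    """One trial: a fresh MDP, with hidden state shared across its episodes.

    The policy's hidden state h persists across the episodes of a trial and
    is only reset between trials, as described above (interfaces assumed).
    """
    env = sample_mdp()                  # draw a separate MDP for this trial
    h = policy.initial_state()          # hidden state reset once per trial
    a, r, d = 0, 0.0, 0                 # placeholder values for the first input
    total = 0.0
    for _ in range(n_episodes):
        s = env.reset()                 # fresh s0 for every episode
        done = False
        while not done:
            a, h = policy.step((s, a, r, d), h)   # input is the (s, a, r, d) tuple
            s, r, done = env.step(a)
            d = 1 if done else 0
            total += r
    return total
```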
The objective under this formulation is to maximize the expected total discounted reward accumulated during a single trial rather than a single episode. Maximizing this objective is equivalent to minimizing the cumulative pseudo-regret (Bubeck & Cesa-Bianchi, 2012). Since the underlying MDP changes across trials, as long as different strategies are required for different MDPs, the agent must act differently according to its belief over which MDP it is currently in. Hence, the agent is forced to integrate all the information it has received, including past actions, rewards, and termination flags, and adapt its strategy continually. Hence, we have set up an end-to-end optimization process, where the agent is encouraged to learn a "fast" reinforcement learning algorithm.

For clarity of exposition, we have defined the "inner" problem (of which the agent sees n episodes per trial) to be an MDP rather than a POMDP. However, the method can also be applied in the partially observed setting without any conceptual changes. In the partially observed setting, the agent is faced with a sequence of POMDPs, and it receives an observation $o_t$ instead of state $s_t$ at time $t$. The visual navigation experiment in Section 3.3 is actually an instance of this POMDP setting.

"}, {"section_index": "4", "section_name": "2.3 POLICY REPRESENTATION", "section_text": "

We represent the policy as a general recurrent neural network. Each timestep, it receives the tuple $(s, a, r, d)$ as input, which is embedded using a function $\phi(s, a, r, d)$ and provided as input to an RNN. To alleviate the difficulty of training RNNs due to vanishing and exploding gradients (Bengio et al., 1994), we use Gated Recurrent Units (GRUs) (Cho et al., 2014), which have been demonstrated to have good empirical performance (Chung et al., 2014; Jozefowicz et al., 2015). The output of the GRU is fed to a fully connected layer followed by a softmax function, which forms the distribution over actions.

We have also experimented with alternative architectures which explicitly reset part of the hidden state each episode of the sampled MDP, but we did not find any improvement over the simple architecture described above.
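A minimal NumPy sketch of one policy step as just described: embed the input tuple, update the GRU, and map the new hidden state to action probabilities. The weight names are ours, not the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One standard GRU update; p is a dict of weight matrices (names assumed)."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)            # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)            # reset gate
    h_tilde = np.tanh(p["W"] @ x + p["U"] @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde

def policy_step(embedded_sard, h, p):
    """Map the embedded (s, a, r, d) tuple to action probabilities."""
    h = gru_step(embedded_sard, h, p)
    logits = p["Wout"] @ h                            # fully connected output layer
    e = np.exp(logits - logits.max())
    return e / e.sum(), h                             # softmax over actions
```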
"}, {"section_index": "5", "section_name": "2.4 POLICY OPTIMIZATION", "section_text": "

After formulating the task as a reinforcement learning problem, we can readily use standard off-the-shelf RL algorithms to optimize the policy. We use a first-order implementation of Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), because of its excellent empirical performance, and because it does not require excessive hyperparameter tuning. For more details, we refer the reader to the original paper. To reduce variance in the stochastic gradient estimation, we use a baseline which is also represented as an RNN using GRUs as building blocks. We optionally apply Generalized Advantage Estimation (GAE) (Schulman et al., 2016) to further reduce the variance.

"}, {"section_index": "6", "section_name": "3 EVALUATION", "section_text": "

We designed experiments to answer the following questions:

Can RL2 learn algorithms that achieve good performance on MDP classes with special structure, relative to existing algorithms tailored to this structure that have been proposed in the literature?
Can RL2 scale to high-dimensional tasks?

For the first question, we evaluate RL2 on two sets of tasks, multi-armed bandits (MAB) and tabular MDPs. These problems have been studied extensively in the reinforcement learning literature, and this body of work includes algorithms that achieve asymptotically optimal performance. We demonstrate that our method can achieve performance comparable with these theoretically justified algorithms.

For the second question, we evaluate RL2 on a vision-based navigation task. Our experiments show that the learned policy makes effective use of the learned visual information and also short-term information acquired from previous episodes.

"}, {"section_index": "7", "section_name": "3.1 MULTI-ARMED BANDITS", "section_text": "

Multi-armed bandit problems are a subset of MDPs where the agent's environment is stateless. Specifically, there are k arms (actions), and at every time step, the agent pulls one of the arms, say i, and receives a reward drawn from an unknown distribution: our experiments take each arm to be a Bernoulli distribution with parameter $p_i$. The goal is to maximize the total reward obtained over a fixed number of time steps. The key challenge is balancing exploration and exploitation: "exploring" each arm enough times to estimate its distribution ($p_i$), but eventually switching over to "exploitation" of the best arm. Despite the simplicity of multi-armed bandit problems, their study has led to a rich theory and a collection of algorithms with optimality guarantees.

Using RL2, we can train an RNN policy to solve bandit problems by training it on a given distribution $\rho_{\mathcal{M}}$. If the learning is successful, the resulting policy should be able to perform competitively with the theoretically optimal algorithms. We randomly generated bandit problems by sampling each parameter $p_i$ from the uniform distribution on [0, 1]. After training the RNN policy with RL2, we compared it against the following strategies:

Random: this is a baseline strategy, where the agent pulls a random arm each time.
Gittins index (Gittins, 1979): this method gives the Bayes optimal solution in the discounted infinite-horizon case, by computing an index separately for each arm, and taking the arm with the largest index. While this work shows it is sufficient to independently compute an index for each arm (hence avoiding combinatorial explosion with the number of arms), it doesn't show how to tractably compute these individual indices exactly. We follow the practical approximations described in Gittins et al. (2011), Chakravorty & Mahajan (2013), and Whittle (1982), and choose the best-performing approximation for each setup.
UCB1 (Auer, 2002): this method estimates an upper confidence bound, and pulls the arm with the largest value of $\mathrm{ucb}_i(t) = \hat{\mu}_i(t-1) + c\sqrt{\frac{2\log t}{T_i(t-1)}}$, where $\hat{\mu}_i(t-1)$ is the estimated mean parameter for the $i$th arm, $T_i(t-1)$ is the number of times the $i$th arm has been pulled, and $c$ is a tunable hyperparameter (Audibert & Munos, 2011). We initialize the statistics with exactly one success and one failure, which corresponds to a Beta(1, 1) prior.
Thompson sampling (TS) (Thompson, 1933): this is a simple method which, at each time step, samples a list of arm means from the posterior distribution, and chooses the best arm according to this sample. It has been demonstrated to compare favorably to UCB1 empirically (Chapelle & Li, 2011). We also experiment with an optimistic variant (OTS) (May et al., 2012), which samples N times from the posterior, and takes the one with the highest probability.
ε-Greedy: in this strategy, the agent chooses the arm with the best empirical mean with probability 1 - ε, and chooses a random arm with probability ε. We use the same initialization as UCB1.
Greedy: this is a special case of ε-Greedy with ε = 0.

The Bayesian methods, Gittins index and Thompson sampling, take advantage of the distribution $\rho_{\mathcal{M}}$, and we provide these methods with the true distribution. For each method with hyperparameters, we maximize the score with a separate grid search for each of the experimental settings. The hyperparameters used for TRPO are shown in the appendix.
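For reference, minimal sketches of the two index rules just described, for Bernoulli arms; the array shapes and function names are our assumptions:

```python
import numpy as np

def ucb1_choice(mu_hat, pulls, t, c=1.0):
    """UCB1: argmax of mu_hat_i + c * sqrt(2 log t / T_i(t-1)).

    mu_hat and pulls are per-arm arrays that already include the
    one-success/one-failure initialization mentioned above.
    """
    return int(np.argmax(mu_hat + c * np.sqrt(2.0 * np.log(t) / pulls)))

def thompson_choice(alpha, beta, rng=np.random):
    """Thompson sampling: one posterior draw per arm from Beta(alpha_i, beta_i)."""
    return int(np.argmax(rng.beta(alpha, beta)))
```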
The results are summarized in Table 1. Learning curves for various settings are shown in Figure 2. We observe that our approach achieves performance that is almost as good as the reference methods, which were (human) designed specifically to perform well on multi-armed bandit problems. It is worth noting that the published algorithms are mostly designed to minimize asymptotic regret (rather than finite-horizon regret), hence there tends to be a little bit of room to outperform them in the finite-horizon settings.

Table 1: MAB Results. Each grid cell records the total reward averaged over 1000 different instances of the bandit problem. We consider k ∈ {5, 10, 50} bandits and n ∈ {10, 100, 500} episodes of interaction. We highlight the best-performing algorithms in each setup according to the computed mean, and we also highlight the other algorithms in that row whose performance is not significantly different from the best one (determined by a one-sided t-test with p = 0.05).

Setup             Random   Gittins   TS      OTS     UCB1    ε-Greedy   Greedy   RL2
n = 10, k = 5     5.0      6.6       5.7     6.5     6.7     6.6        6.6      6.7
n = 10, k = 10    5.0      6.6       5.5     6.2     6.7     6.6        6.6      6.7
n = 10, k = 50    5.1      6.5       5.2     5.5     6.6     6.5        6.5      6.8
n = 100, k = 5    49.9     78.3      74.7    77.9    78.0    75.4       74.8     78.7
n = 100, k = 10   49.9     82.8      76.7    81.4    82.4    77.4       77.1     83.5
n = 100, k = 50   49.8     85.2      64.5    67.7    84.3    78.3       78.0     84.9
n = 500, k = 5    249.8    405.8     402.0   406.7   405.8   388.2      380.6    401.6
n = 500, k = 10   249.0    437.8     429.5   438.9   437.1   408.0      395.0    432.5
n = 500, k = 50   249.6    463.7     427.2   437.6   457.6   413.6      402.8    438.9

[Figure 2: learning curves for k ∈ {5, 10, 50}, each plotted against training iteration with the Gittins index score as a reference line; panels (a) n = 10, (b) n = 100, (c) n = 500.]

Figure 2: RL2 learning curves for multi-armed bandits. Performance is normalized such that Gittins index scores 1, and random policy scores 0.

We observe that there is a noticeable gap between Gittins index and RL2 in the most challenging scenario, with 50 arms and 500 episodes. This raises the question whether better architectures or better (slow) RL algorithms should be explored. To determine the bottleneck, we trained the same policy architecture using supervised learning, using the trajectories generated by the Gittins index approach as training data. We found that the learned policy, when executed in test domains, achieved the same level of performance as the Gittins index approach, suggesting that there is room for improvement by using better RL algorithms.

"}, {"section_index": "8", "section_name": "3.2 TABULAR MDPS", "section_text": "

The bandit problem provides a natural and simple setting to investigate whether the policy learns to trade off between exploration and exploitation. However, the problem itself involves no sequential decision making, and does not fully characterize the challenges in solving MDPs. Hence, we perform further experiments using randomly generated tabular MDPs, where there is a finite number of possible states and actions, small enough that the transition probability distribution can be explicitly given as a table. We compare our approach with the following methods:
Random: the agent chooses an action uniformly at random for each time step.
PSRL (Strens, 2000; Osband et al., 2013): this is a direct generalization of Thompson sampling to MDPs, where at the beginning of each episode, we sample an MDP from the posterior distribution, and take actions according to the optimal policy for the entire episode. Similarly, we include an optimistic variant (OPSRL), which has also been explored in Osband & Van Roy (2016).
BEB (Kolter & Ng, 2009): this is a model-based optimistic algorithm that adds an exploration bonus to (thus far) infrequently visited states and actions.
UCRL2 (Jaksch et al., 2010): this algorithm computes, at each iteration, the optimal policy against an optimistic MDP under the current belief, using an extended value iteration procedure.
ε-Greedy: this algorithm takes actions optimal against the MAP estimate according to the current posterior, which is updated once per episode.
Greedy: a special case of ε-Greedy with ε = 0.
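A minimal sketch of the posterior-sampling step at the heart of PSRL; the flat Dirichlet prior matches the MDP construction described in the following paragraph, and the planner and names are our assumptions:

```python
import numpy as np

def sample_transition_model(counts, rng=np.random):
    """Draw one dynamics model from the posterior (PSRL-style sketch).

    counts[s, a, s'] are observed transition counts; adding 1 encodes a
    flat Dirichlet prior. A planner (e.g., finite-horizon value iteration)
    would then act optimally against the sampled model for one episode.
    """
    S, A, _ = counts.shape
    P = np.empty((S, A, S))
    for s in range(S):
        for a in range(A):
            P[s, a] = rng.dirichlet(counts[s, a] + 1.0)
    return P
```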
The optimal strategy is to explore. the maze efficiently during the first episode, and after locating the target, act optimally against the. current maze and target based on the collected information. An illustration of the task is given in. Figure 4\n(a) Sample observation (b) Layout of the 5 5 maze in (a) (c) Layout of a 9 9 maze\nFigure 4: Visual navigation. The target block is shown in red, and occupies an entire grid in the maze layout.\nVisual navigation alone is a challenging task for reinforcement learning. The agent only receives very sparse rewards during training, and does not have the primitives for efficient exploration at the beginning of training. It also needs to make efficient use of memory to decide how it should explore the space, without forgetting about where it has already explored. Previously, [Oh et al.[(2016) have studied similar vision-based navigation tasks in Minecraft. However, they use higher-level actions for efficient navigation. Similar high-level actions in our task would each require around 5 low-level actions combined in the right way. In contrast, our RL2 agent needs to learn these higher-level actions from scratch.\nWe use a simple training setup, where we use small mazes of size 5 5, with 2 episodes of interac tion, each with horizon up to 250. Here the size of the maze is measured by the number of grid cells along each wall in a discrete representation of the maze. During each trial, we sample 1 out of 1000 randomly generated configurations of map layout and target positions. During testing, we evaluate on 1o00 separately generated configurations. In addition, we also study its extrapolation behavior along two axes, by (1) testing on large mazes of size 9 9 (see Figure4c) and (2) running the agent for up to 5 episodes in both small and large mazes. For the large maze, we also increase the horizon per episode by 4x due to the increased size of the maze\nTable 3: Results for visual navigation. These metrics are computed using the best run among all runs shown in Figure|5] In[3c] we measure the proportion of mazes where the trajectory length in the second episode does not exceed the trajectory length in the first episode..\n(b) %oSuccess Episode Small Large Episode Small Large Small Large 1 99.3% 97.1% 91.7% 52.4 1.3 180.1 6.0 1 71.4% 2 39.1 0.9 151.8 5.9 2 99.6% 96.7% 3 42.6 1.0 169.3 6.3 3 99.7% 95.8% 4 43.5 1.1 162.3 6.4 4 99.4% 95.6% 5 43.9 1.1 169.3 6.5 5 99.6% 96.1%\n2Videos for the task are available at https : / /goo . g1/ rDDBpb\n0 -2 4 6 8 10 -12 -14 16 0 500 1000 1500 2000 2500 3000 35 Iteration\n4 6 8 10 12 -14 16 0 500 1000 1500 2000 2500 3000 3500 Iteration\nFigure 5: RL2 learning curves for visual navigation. Each curve shows a different random initial ization of the RNN weights (by using a different random seed). Performance varies greatly across different initializations.\nThe results are summarized in Table[3] and the learning curves are shown in Figure[5] We observe that there is a significant reduction in trajectory lengths between the first two episodes in both the smaller and larger mazes, suggesting that the agent has learned how to use information from past. episodes. It also achieves reasonable extrapolation behavior in further episodes by maintaining its performance, although there is a small drop in the rate of success in the larger mazes. 
We also observe that on larger mazes, the ratio of improved trajectories is lower, likely because the agent has not learned how to act optimally in the larger mazes.

Still, even on the small mazes, the agent does not learn to perfectly reuse prior information. An illustration of the agent's behavior is shown in Figure 6. The intended behavior, which occurs most frequently, as shown in 6a and 6b, is that the agent should remember the target's location, and utilize it to act optimally in the second episode. However, occasionally the agent forgets about where the target was, and continues to explore in the second episode, as shown in 6c and 6d. We believe that better reinforcement learning techniques used as the outer-loop algorithm will improve these results in the future.

(a) Good behavior, 1st episode (b) Good behavior, 2nd episode (c) Bad behavior, 1st episode (d) Bad behavior, 2nd episode

Figure 6: Visualization of the agent's behavior. In each scenario, the agent starts at the center of the blue block, and the goal is to reach anywhere in the red block.

"}, {"section_index": "9", "section_name": "4 RELATED WORK", "section_text": "

The concept of using prior experience to speed up reinforcement learning algorithms has been explored in the past in various forms. Earlier studies have investigated automatic tuning of hyperparameters, such as learning rate and temperature (Ishii et al., 2002; Schweighofer & Doya, 2003), as a form of meta-learning. Wilson et al. (2007) use hierarchical Bayesian methods to maintain a posterior over possible models of dynamics, and apply optimistic Thompson sampling according to the posterior. Many works in hierarchical reinforcement learning propose to extract reusable skills from previous tasks to speed up exploration in new tasks (Singh, 1992; Perkins et al., 1999). We refer the reader to Taylor & Stone (2009) for a more thorough survey on the multi-task and transfer learning aspects.

More recently, Fu et al. (2015) propose a model-based approach on top of iLQG with unknown dynamics (Levine & Abbeel, 2014), which uses samples collected from previous tasks to build a neural network prior for the dynamics, and can perform one-shot learning on new, but related, tasks thanks to reduced sample complexity. There has been a growing interest in using deep neural networks for multi-task learning and transfer learning (Parisotto et al., 2015; Rusu et al., 2015; 2016a; Devin et al., 2016; Rusu et al., 2016b).

In the broader context of machine learning, there has been a lot of interest in one-shot learning for object classification (Vilalta & Drissi, 2002; Fei-Fei et al., 2006; Larochelle et al., 2008; Lake et al., 2011; Koch, 2015). Our work draws inspiration from a particular line of work (Younger et al., 2001; Santoro et al., 2016; Vinyals et al., 2016), which formulates meta-learning as an optimization problem, and can thus be optimized end-to-end via gradient descent. While these works apply to the supervised learning setting, our work applies in the more general reinforcement learning setting. Although the reinforcement learning setting is more challenging, the resulting behavior is far richer: our agent must not only learn to exploit existing information, but also learn to explore, a problem that is usually not a factor in supervised learning. Another line of work (Hochreiter et al., 2001; Younger et al., 2001; Andrychowicz et al., 2016; Li & Malik, 2016) studies meta-learning over the optimization process.
There, the meta-learner makes explicit updates to a parametrized model. In comparison, we do not use a directly parametrized policy; instead, the recurrent neural network agent acts as the meta-learner and the resulting policy simultaneously.

Our formulation essentially constructs a partially observable MDP (POMDP) which is solved in the outer loop, where the underlying MDP is unobserved by the agent. This reduction of an unknown MDP to a POMDP can be traced back to dual control theory (Feldbaum, 1960), where "dual" refers to the fact that one is controlling both the state and the state estimate. Feldbaum pointed out that the solution can in principle be computed with dynamic programming, but doing so is usually impractical. POMDPs with such structure have also been studied under the name "mixed observability MDPs" (Ong et al., 2010). However, the method proposed there suffers from the usual challenges of solving POMDPs in high dimensions.

Apart from the various multiple-episode tasks we investigate in this work, previous literature on training RNN policies has used similar tasks that require memory to test if long-term dependency can be learned. Recent examples include the Labyrinth experiment in the A3C paper (Mnih et al., 2016), and the water maze experiment in the Recurrent DDPG paper (Heess et al., 2015a). Although these tasks can be reformulated under the RL2 framework, the key difference is that they focus on the memory aspect instead of the fast RL aspect.

The formulation of searching for a best-performing algorithm, whose performance is averaged over a given distribution over MDPs, has been investigated in the past in more limited forms (Maes et al., 2011; Castronovo et al., 2012). There, they propose to learn an algorithm to solve multi-armed bandits using program search, where the search space consists of simple formulas composed from hand-specified primitives, which needs to be tuned for each specific distribution over MDPs. In comparison, our approach allows for entirely end-to-end training without requiring such domain knowledge.

"}, {"section_index": "10", "section_name": "5 DISCUSSION", "section_text": "

This paper suggests a different approach for designing better reinforcement learning algorithms: instead of acting as the designers ourselves, learn the algorithm end-to-end using standard reinforcement learning techniques. That is, the "fast" RL algorithm is a computation whose state is stored in the RNN activations, and the RNN's weights are learned by a general-purpose "slow" reinforcement learning algorithm. Our method, RL2, has demonstrated competence comparable with theoretically optimal algorithms in small-scale settings. We have further shown its potential to scale to high-dimensional tasks.

In the experiments, we have identified opportunities to improve upon RL2: the outer-loop reinforcement learning algorithm was shown to be an immediate bottleneck, and we believe that for settings with extremely long horizons, better architecture may also be required for the policy. Although we have used generic methods and architectures for the outer-loop algorithm and the policy, doing this also ignores the underlying episodic structure. We expect algorithms and policy architectures that exploit the problem structure to significantly boost the performance.

"}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "

We would like to thank our colleagues at Berkeley and OpenAI for insightful discussions. This research was funded in part by ONR through a PECASE award. Yan Duan was also supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. We gratefully acknowledge the support of the NSF through grant IIS-1619362 and of the ARC through a Laureate Fellowship (FL110100281) and through the ARC Centre of Excellence for Mathematical and Statistical Frontiers.

"}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "

Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. arXiv preprint arXiv:1606.04474, 2016.

Peter Auer. Using confidence bounds for exploitation-exploration trade-offs.
Journal of Machine Learning Research, 3(Nov):397-422, 2002.

Michael Castronovo, Francis Maes, Raphael Fonteneau, and Damien Ernst. Learning exploration/exploitation strategies for single trajectory reinforcement learning. In EWRL, pp. 1-10, 2012.

Jhelum Chakravorty and Aditya Mahajan. Multi-armed bandits, Gittins index, and its calculation. Methods and Applications of Statistics in Clinical Trials: Planning, Analysis, and Inferential Methods, 2:416-435, 2013.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

AA Feldbaum. Dual control theory. I. Avtomatika i Telemekhanika, 21(9):1240-1249, 1960.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.

Justin Fu, Sergey Levine, and Pieter Abbeel. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. arXiv preprint arXiv:1509.06841, 2015.

Mohammad Ghavamzadeh, Shie Mannor, Joelle Pineau, Aviv Tamar, et al. Bayesian reinforcement learning: a survey. World Scientific, 2015.

John Gittins, Kevin Glazebrook, and Richard Weber. Multi-armed bandit allocation indices. John Wiley & Sons, 2011.

John C Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society. Series B (Methodological), pp. 148-177, 1979.

Xiaoxiao Guo, Satinder Singh, Honglak Lee, Richard L Lewis, and Xiaoshi Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In Advances in Neural Information Processing Systems, pp. 3338-3346, 2014.

Nicolas Heess, Jonathan J Hunt, Timothy P Lillicrap, and David Silver. Memory-based control with recurrent neural networks. arXiv preprint arXiv:1512.04455, 2015a.

Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87-94. Springer, 2001.

Shin Ishii, Wako Yoshida, and Junichiro Yoshimoto. Control of exploitation-exploration meta-parameter in reinforcement learning. Neural Networks, 15(4):665-687, 2002.

Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.

Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 2342-2350, 2015.
URL http://jmlr.org/proceedings/papers/v37/jozefowicz15.html.
Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaśkowski. ViZDoom: A Doom-based AI research platform for visual reinforcement learning. arXiv preprint arXiv:1605.02097, 2016.
Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. Zero-data learning of new tasks. In AAAI, volume 1, pp. 3, 2008.
Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1-40, 2016.
Ke Li and Jitendra Malik. Learning to optimize. arXiv preprint arXiv:1606.01885, 2016.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Benedict C May, Nathan Korda, Anthony Lee, and David S Leslie. Optimistic Bayesian sampling in contextual-bandit problems. Journal of Machine Learning Research, 13(Jun):2069-2106, 2012.
Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee. Control of memory, active perception, and action in Minecraft. arXiv preprint arXiv:1605.09128, 2016.
Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015.
Theodore J Perkins, Doina Precup, et al. Using options for knowledge transfer in reinforcement learning. University of Massachusetts, Amherst, MA, USA, Tech. Rep, 1999.
Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. arXiv preprint arXiv:1511.06295, 2015.
Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016a.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.
John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In International Conference on Learning Representations (ICLR 2016), 2016.
Nicolas Schweighofer and Kenji Doya. Meta-learning in reinforcement learning. Neural Networks, 16(1):5-9, 2003.
Satinder Pal Singh. Transfer of learning by composing solutions of elemental sequential tasks. Machine Learning, 8(3-4):323-339, 1992.
Malcolm Strens. A Bayesian framework for reinforcement learning. In ICML, pp. 943-950, 2000.
William R Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933.
Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77-95, 2002.
Niklas Wahlström, Thomas B Schön, and Marc Peter Deisenroth. From pixels to torques: Policy learning with deep dynamical models. arXiv preprint arXiv:1502.02251, 2015.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. arXiv preprint arXiv:1606.04080, 2016.
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pp. 2746-2754, 2015.
Peter Whittle. Optimization over time. John Wiley & Sons, Inc., 1982.
Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli. Multi-task reinforcement learning: a hierarchical Bayesian approach. In Proceedings of the 24th International Conference on Machine Learning, pp. 1015-1022. ACM, 2007.
"}, {"section_index": "13", "section_name": "A DETAILED EXPERIMENT SETUP", "section_text": "
Common to all experiments: as mentioned in Section 2.2, we use placeholder values when necessary. For example, at t = 0 there is no previous action, reward, or termination flag. Since all of our experiments use discrete actions, we use the embedding of the action 0 as a placeholder for actions, and 0 for both the rewards and termination flags. To form the input to the GRU, we use the values for the rewards and termination flags as-is, and embed the states and actions as described separately below for each experiment. These values are then concatenated together to form the joint embedding.

For the neural network architecture, we use rectified linear units throughout the experiments as the hidden activation, and we apply weight normalization without data-dependent initialization (Salimans & Kingma, 2016) to all weight matrices. The hidden-to-hidden weight matrix uses an orthogonal initialization (Saxe et al., 2013), and all other weight matrices use Xavier initialization (Glorot & Bengio, 2010). We initialize all bias vectors to 0. Unless otherwise mentioned, the policy and the baseline use separate neural networks with the same architecture until the final layer, where the number of outputs differs.

All experiments are implemented using TensorFlow (Abadi et al., 2016) and rllab (Duan et al., 2016). We use the implementations of classic algorithms provided by the TabulaRL package (Osband, 2016).

The parameters for TRPO are shown in Table 1. Since the environment is stateless, we use a constant embedding 0 as a placeholder in place of the states, and a one-hot embedding for the actions.

Table 1: Hyperparameters for TRPO: multi-armed bandits
Discount 0.99
GAE λ 0.3
Policy Iters Up to 1000
#GRU Units 256
Mean KL 0.01
Batch size 250000

The parameters for TRPO are shown in Table 2. We use a one-hot embedding for the states and actions separately, which are then concatenated together.

Table 2: Hyperparameters for TRPO: tabular MDPs
Discount 0.99
GAE λ 0.3
Policy Iters Up to 10000
#GRU Units 256
Mean KL 0.01
Batch size 250000

The parameters for TRPO are shown in Table 3. For this task, we use a neural network to form the joint embedding. We rescale the images to have width 40 and height 30 with RGB channels preserved, and we recenter the RGB values to lie within the range [-1, 1]. Then, this preprocessed image is passed through 2 convolution layers, each with 16 filters of size 5 x 5 and stride 2. The action is first embedded into a 256-dimensional vector, where the embedding is learned, and then concatenated with the flattened output of the final convolution layer. The joint vector is then fed to a fully connected layer with 256 hidden units.

Unlike previous experiments, we let the policy and the baseline share the same neural network. We found this to improve the stability of training baselines and also the end performance of the policy, possibly due to regularization effects and better learned features imposed by weight sharing. Similar weight-sharing techniques have also been explored in Mnih et al. (2016).

Table 3: Hyperparameters for TRPO: visual navigation
Discount 0.99
GAE λ 0.99
Policy Iters Up to 5000
#GRU Units 256
Mean KL 0.01
Batch size 50000
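To make the joint-embedding construction above concrete, here is a minimal NumPy sketch of assembling the per-timestep GRU input; the shapes and helper names are illustrative, not the released code.

```python
import numpy as np

def gru_input(state_emb, prev_action, prev_reward, prev_done,
              action_embeddings):
    """Assemble the joint per-timestep input described above.

    state_emb:         embedding of the current observation
    prev_action:       integer id of the previous action (0 = placeholder)
    prev_reward:       scalar reward from the previous step (0.0 at t = 0)
    prev_done:         termination flag of the previous step (0.0 at t = 0)
    action_embeddings: learned lookup table, shape (num_actions, emb_dim)
    """
    a_emb = action_embeddings[prev_action]         # embed the discrete action
    extras = np.array([prev_reward, prev_done])    # reward/flag used as-is
    return np.concatenate([state_emb, a_emb, extras])

# At t = 0 the placeholders are action id 0, reward 0, and termination flag 0.
table = np.random.randn(5, 8) * 0.01
x0 = gru_input(np.zeros(16), 0, 0.0, 0.0, table)
print(x0.shape)  # (16 + 8 + 2,)
```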
There are 3 algorithms with hyperparameters: UCB1, Optimistic Thompson Sampling (OTS), and ε-Greedy. We perform a coarse grid search to find the best hyperparameter for each of them. More specifically:

UCB1: We test c ∈ {0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}. The best found parameter for each setting is given in Table 4.

Table 4: Best hyperparameter for UCB1
Setting / Best c
n = 10, k = 5: 0.1
n = 10, k = 10: 0.1
n = 10, k = 50: 0.1
n = 100, k = 5: 0.2
n = 100, k = 10: 0.2
n = 100, k = 50: 0.2
n = 500, k = 5: 0.2
n = 500, k = 10: 0.2
n = 500, k = 50: 0.2

Table 5: Best hyperparameter for OTS
Setting / Best #samples
n = 10, k = 5: 15
n = 10, k = 10: 14
n = 10, k = 50: 19
n = 100, k = 5: 8
n = 100, k = 10: 20
n = 100, k = 50: 16
n = 500, k = 5: 7
n = 500, k = 10: 20
n = 500, k = 50: 20

Table 6: Best hyperparameter for ε-Greedy
Setting / Best ε
n = 10, k = 5: 0.0
n = 10, k = 10: 0.0
n = 10, k = 50: 0.0
n = 100, k = 5: 0.0
n = 100, k = 10: 0.0
n = 100, k = 50: 0.1
n = 500, k = 5: 0.1
n = 500, k = 10: 0.1
n = 500, k = 50: 0.1

Optimistic PSRL (OPSRL): The hyperparameter is the number of posterior samples. We use up to 20 samples. The best found parameter for each setting is given in Table 7.

Table 7: Best hyperparameter for OPSRL
Setting / Best #samples
n = 10: 14
n = 25: 14
n = 50: 14
n = 75: 14
n = 100: 17

BEB: We search for the scaling factor in front of the exploration bonus, in the log-linear span of [0.0001, 1.0] with 21 way points. The actual searched parameters are 0.0001, 0.000158, 0.000251, 0.000398, 0.000631, 0.001, 0.001585, 0.002512, 0.003981, 0.00631, 0.01, 0.015849, 0.025119, 0.039811, 0.063096, 0.1, 0.158489, 0.251189, 0.398107, 0.630957, 1.0. The best found parameter for each setting is given in Table 8.

Table 8: Best hyperparameter for BEB
Setting / Best scaling
n = 10: 0.002512
n = 25: 0.001585
n = 50: 0.001585
n = 75: 0.001585
n = 100: 0.001585

ε-Greedy: We test ε ∈ {0., 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}. The best found parameter for each setting is given in Table 9.

Table 9: Best hyperparameter for ε-Greedy
Setting / Best ε
n = 10: 0.1
n = 25: 0.1
n = 50: 0.1
n = 75: 0.1
n = 100: 0.1

Table 10: Best hyperparameter for UCRL2
Setting / Best scaling
n = 10: 0.398107
n = 25: 0.398107
n = 50: 0.398107
n = 75: 0.398107
n = 100: 0.398107

In this section, we provide further analysis of the behavior of the RL2 agent in comparison with the baseline algorithms, on the multi-armed bandit task. Certain algorithms such as UCB1 are designed not in the Bayesian context; instead they are tailored to be robust in adversarial cases. To highlight this aspect, we evaluate the algorithms on a different metric, namely the percentage of trials where the best arm is recovered. We treat the best arm chosen by the policy to be the arm that has been pulled most often, and the ground-truth best arm is the arm with the highest mean parameter. In addition, we split the set of all possible bandit tasks into simpler and harder tasks, where the difficulty is measured by the ε-gap between the mean parameter of the best arm and the second-best arm.
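A minimal sketch of this recovery metric, assuming access to per-trial pull counts and the true arm means; all names here are illustrative rather than from the released code.

```python
import numpy as np

def best_arm_recovered(pull_counts, true_means):
    """The 'chosen' best arm is the most-pulled one; the ground-truth
    best arm is the one with the highest mean parameter."""
    return int(np.argmax(pull_counts)) == int(np.argmax(true_means))

def recovery_rate(trials, lo, hi):
    """Fraction of trials whose eps-gap (best mean minus second-best mean)
    falls in [lo, hi) and whose best arm was recovered."""
    hits, total = 0, 0
    for pull_counts, true_means in trials:
        sorted_means = np.sort(true_means)
        gap = sorted_means[-1] - sorted_means[-2]
        if lo <= gap < hi:
            total += 1
            hits += best_arm_recovered(pull_counts, true_means)
    return hits / total if total else float("nan")
```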
We compare the percentage of trials recovering the best arm separately according to the ε-gap, as shown in Table 11.

Table 11: Percentage of tasks where the best arm is chosen most frequently, with k = 5 arms and n = 500 episodes of interaction
Setup / Random / Gittins / TS / OTS / UCB1 / UCB1* / ε-Greedy / Greedy / RL2
ε ∈ [0, 0.01]: 21.5% 51.1% 53.1% 52.8% 50.9% 56.5% 37.3% 38.3% 46.1%
ε ∈ [0.01, 0.05]: 19.3% 59.5% 71.2% 67.4% 62.5% 76.3% 42.3% 41.3% 55.1%
ε ∈ [0.05, 0.1]: 17.7% 71.2% 91.5% 84.0% 78.9% 94.6% 46.1% 45.7% 67.4%
ε ∈ [0.1, 0.3]: 20.1% 92.7% 99.2% 95.3% 93.5% 99.9% 58.1% 58.4% 87.1%
ε ∈ [0.3, 0.5]: 20.4% 99.6% 100.0% 99.5% 99.8% 100.0% 85.4% 84.6% 99.0%
ε ∈ [0.5, 1.0]: 19.4% 100.0% 100.0% 100.0% 100.0% 100.0% 98.4% 99.1% 100.0%

Note that there are two columns associated with the UCB1 algorithm, where UCB1 (without "*") is evaluated with c = 0.2, the parameter that gives the best performance as evaluated by the average total reward, and UCB1* uses c = 1.0. Surprisingly, although using c = 1.0 performs best in terms of recovering the best arm, its performance is significantly worse than using c = 0.2 when evaluated under the average total reward (369.2 ± 2.2 vs. 405.8 ± 2.2). This also explains why, although RL2 does not perform the best according to this metric (which is entirely expected, since it is not optimized under this metric), it achieves average total reward comparable to the other best-performing methods.
"}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "
Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.
Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pp. 249-256, 2010.
Ian Osband. TabulaRL. https://github.com/iosband/TabulaRL, 2016.
Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.
Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013."}]
S1dIzvclg
[{"section_index": "0", "section_name": "A RECURRENT NEURAL NETWORK WITHOUT CHAOS", "section_text": "
Thomas Laurent
Department of Mathematics, Loyola Marymount University, Los Angeles, CA 90045, USA
James von Brecht
james.vonbrecht@csulb.edu
"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "
We introduce an exceptionally simple gated recurrent neural network (RNN) that achieves performance comparable to well-known gated architectures, such as LSTMs and GRUs, on the word-level language modeling task. We prove that our model has simple, predictable and non-chaotic dynamics. This stands in stark contrast to more standard gated architectures, whose underlying dynamical systems exhibit chaotic behavior.
"}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "
Gated recurrent neural networks, such as the Long Short Term Memory network (LSTM) introduced by Hochreiter & Schmidhuber (1997) and the Gated Recurrent Unit (GRU) proposed by Cho et al. (2014), prove highly effective for machine learning tasks that involve sequential data. We propose an exceptionally simple variant of these gated architectures. The basic model takes the form

h_t = θ_t ⊙ tanh(h_{t-1}) + η_t ⊙ tanh(W x_t),    (1)

where ⊙ stands for the Hadamard product. The horizontal/forget gate (i.e. θ_t) and the vertical/input gate (i.e. η_t) take the usual form used in most gated RNN architectures. Specifically,

θ_t := σ(U_θ h_{t-1} + V_θ x_t + b_θ)  and  η_t := σ(U_η h_{t-1} + V_η x_t + b_η).    (2)

To understand the dynamics of this model, consider an input sequence whose learned ith feature carries a single impulse:

(W x_t)(i) = { 1 if t = T; 0 otherwise },    (3)

where (W x_t)(i) stands for the ith component of the vector W x_t. In other words, we consider an input sequence x_t for which the learned ith feature (W x_t)(i) remains off except at time T. When initialized from h_0 = 0, the corresponding response of the network to this "impulse" in the ith feature is

h_t(i) = { 0 if t < T; η_T if t = T; α_t if t > T },    (4)

with α_t a sequence that relaxes toward zero. The forget gate θ_t controls the rate of this relaxation. Thus h_t(i) activates when presented with a strong ith feature, and then relaxes toward zero until the data present the network once again with a strong ith feature. Overall this leads to a dynamically simple model, in which the activation patterns in the hidden states of the network have a clear cause and predictable subsequent behavior.

Dynamics of this sort do not occur in other RNN models. Instead, the three most popular recurrent neural network architectures, namely the vanilla RNN, the LSTM and the GRU, have complex, irregular, and unpredictable dynamics. Even in the absence of input data, these networks can give rise to chaotic dynamical systems. In other words, when presented with null input data, the activation patterns in their hidden states do not necessarily follow a predictable path. The proposed network (1)-(2) has rather dull and minimalist dynamics in comparison; its only attractor is the zero state, and so it stands at the polar-opposite end of the spectrum from chaotic systems. Perhaps surprisingly, at least in the light of this comparison, the proposed network (1) performs as well as LSTMs and GRUs on the word-level language modeling task. We therefore conclude that the ability of an RNN to form chaotic temporal dynamics, in the sense we describe in Section 2, cannot explain its success on word-level language modeling tasks.

In the next section, we review the phenomenon of chaos in RNNs via both synthetic examples and trained models. We also prove a precise, quantified description of the dynamical picture (3)-(4) for the proposed network.
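Before turning to that analysis, here is a minimal NumPy sketch of a CFN cell implementing (1)-(2); the sizes are illustrative, and the initialization follows the scheme described in Section 3 below.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CFNCell:
    """One step of the CFN: h_t = theta_t * tanh(h_{t-1}) + eta_t * tanh(W x_t)."""
    def __init__(self, input_dim, hidden_dim, rng=np.random):
        s = 0.07  # uniform init scale, as in the experiments below
        self.W = rng.uniform(-s, s, (hidden_dim, input_dim))
        self.U_theta = rng.uniform(-s, s, (hidden_dim, hidden_dim))
        self.V_theta = rng.uniform(-s, s, (hidden_dim, input_dim))
        self.U_eta = rng.uniform(-s, s, (hidden_dim, hidden_dim))
        self.V_eta = rng.uniform(-s, s, (hidden_dim, input_dim))
        self.b_theta = np.ones(hidden_dim)    # forget-gate bias init = 1
        self.b_eta = -np.ones(hidden_dim)     # input-gate bias init = -1

    def step(self, h_prev, x):
        theta = sigmoid(self.U_theta @ h_prev + self.V_theta @ x + self.b_theta)
        eta = sigmoid(self.U_eta @ h_prev + self.V_eta @ x + self.b_eta)
        return theta * np.tanh(h_prev) + eta * np.tanh(self.W @ x)
```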
In particular, we show that the dynamical system induced by the proposed network is never chaotic, and for this reason we refer to it as a Chaos-Free Network (CFN). The final section provides a series of experiments that demonstrate that the CFN achieves results comparable to the LSTM on the word-level language modeling task. All together, these observations show that an architecture as simple as (1)-(2) can achieve performance comparable to the more dynamically complex LSTM.

The study of RNNs from a dynamical systems point-of-view has brought fruitful insights into generic features of RNNs (Sussillo & Barak, 2013; Pascanu et al., 2013). We shall pursue a brief investigation of CFN, LSTM and GRU networks using this formalism, as it allows us to identify key distinctions between them. Recall that for a given mapping Φ : R^d → R^d, a given initial time t_0 ∈ N and a given initial state u_0 ∈ R^d, a simple repeated iteration of the mapping Φ,

u_{t+1} = Φ(u_t),  t > t_0,  with  u_{t_0} = u_0,

defines a discrete-time dynamical system. The index t ∈ N represents the current time, while the point u_t ∈ R^d represents the current state of the system. The set of all visited states O^+(u_0) := {u_{t_0}, u_{t_0+1}, ..., u_{t_0+n}, ...} defines the forward trajectory or forward orbit through u_0. An attractor for the dynamical system is a set that is invariant (any trajectory that starts in the set remains in the set) and that attracts all trajectories that start sufficiently close to it. The attractors of chaotic dynamical systems are often fractal sets, and for this reason they are referred to as strange attractors.

A recurrent neural network driven by input data takes the general form

u_t = Φ(u_{t-1}, W_1 x_t, W_2 x_t, ..., W_k x_t),    (5)

where x_t denotes the tth input data point. For example, in the case of the CFN (1)-(2), we have W_1 = W, W_2 = V_θ and W_3 = V_η. To gain insight into the underlying design of the architecture of an RNN, it proves useful to consider how trajectories behave when they are not influenced by any external input. This leads us to consider the dynamical system

u_t = Φ̄(u_{t-1}),  Φ̄(u) := Φ(u, 0, 0, ..., 0),    (6)

which we refer to as the dynamical system induced by the recurrent neural network. The time-invariant system (6) is much more tractable than (5), and it offers a means to investigate the inner workings of a given architecture; it separates the influence of the input data x_t, which can produce essentially any possible response, from the model itself. Studying trajectories that are not influenced by external data will give us an indication of the ability of a given RNN to generate complex and sophisticated trajectories on its own. As we shall see shortly, the dynamical system induced by a CFN has excessively simple and predictable trajectories: all of them converge to the zero state. In other words, its only attractor is the zero state. This is in sharp contrast with the dynamical systems induced by the LSTM or the GRU, which can exhibit chaotic behaviors and have strange attractors.

The learned parameters W_i in (5) describe how data influence the evolution of hidden states at each time step. From a modeling perspective, (6) would occur in the scenario where a trained RNN has learned a weak coupling between a specific data point x_{t_0} and the hidden state at that time, in the sense that the data influence is small and so all W_i x_{t_0} ≈ 0 nearly vanish. The hidden state then transitions according to u_{t_0} ≈ Φ(u_{t_0-1}, 0, 0, ..., 0) = Φ̄(u_{t_0-1}).

We refer to Bertschinger & Natschläger (2004) for a study of the chaotic behavior of a simplified vanilla RNN with a specific statistical model, namely an i.i.d. Bernoulli process, for the input data, as well as a specific statistical model, namely i.i.d. Gaussian, for the weights of the recurrence matrix.
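Numerically, the induced system (6) can be probed by iterating Φ̄ from two nearby initial states and tracking their separation; a small sketch follows, in which the step function and the perturbation size are illustrative.

```python
import numpy as np

def orbit(step, u0, n_steps):
    """Forward orbit of the induced system u_t = step(u_{t-1})."""
    us = [np.asarray(u0, dtype=float)]
    for _ in range(n_steps):
        us.append(step(us[-1]))
    return np.array(us)

def separation(step, u0, n_steps, eps=1e-7, rng=np.random):
    """Distance between orbits started within eps of one another;
    rapid growth of this curve is the practical signature of chaos."""
    u0 = np.asarray(u0, dtype=float)
    v0 = u0 + rng.uniform(-eps, eps, size=u0.shape)
    a, b = orbit(step, u0, n_steps), orbit(step, v0, n_steps)
    return np.linalg.norm(a - b, axis=1)
```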
Figure 1: Strange attractor of a 2-unit LSTM. Successive zooms (from left to right) reveal the self-repeating, fractal nature of the attractor. Colored boxes depict zooming regions.

In this subsection we briefly show that the LSTM and the GRU, in the absence of input data, can lead to dynamical systems u_t = Φ̄(u_{t-1}) that are chaotic in the classical sense of the term (Strogatz, 2014). Figure 1 depicts the strange attractor of the dynamical system

u_t = (h_t, c_t),
h_{t+1} = o ⊙ tanh(f ⊙ c_t + i ⊙ g),    (7)
c_{t+1} = f ⊙ c_t + i ⊙ g,    (8)
i := σ(W_i h_t + b_i),  f := σ(W_f h_t + b_f),  o := σ(W_o h_t + b_o),  g := tanh(W_g h_t + b_g),    (9)

induced by a two-unit LSTM whose four 2 x 2 weight matrices W_i, W_f, W_o and W_g have small integer entries, and zero bias for the model parameters. These weights were randomly generated from a normal distribution with standard deviation 5 and then rounded to the nearest integer. Figure 1(a) was obtained by choosing an initial state u_0 = (h_0, c_0) uniformly at random in [0, 1]^2 x [0, 1]^2 and plotting the h-component of the iterates u_t = (h_t, c_t) for t between 10^3 and 10^5 (so this figure should be regarded as a two-dimensional projection of a four-dimensional attractor, which explains its tangled appearance). Most trajectories starting in [0, 1]^2 x [0, 1]^2 converge toward the depicted attractor. The resemblance between this attractor and classical strange attractors such as the Hénon attractor is striking (see Figure 5 in the appendix for a depiction of the Hénon attractor). Successive zooms on the branch of the LSTM attractor from Figure 1(a) reveal its fractal nature. Figure 1(b) is an enlargement of the red box in Figure 1(a), and Figure 1(c) is an enlargement of the magenta box in Figure 1(b). We see that the structure repeats itself as we zoom in.

Like LSTM networks, GRUs can also lead to dynamical systems that are chaotic and they can also have strange attractors. The depiction of such an attractor, in the case of a two-unit GRU, is provided in Figure 6 of the appendix.

The most practical consequence of chaos is that the long-term behavior of forward orbits can exhibit a high degree of sensitivity to the initial state u_0. Figure 2 provides an example of such behavior for the dynamical system (7)-(9). An initial condition u_0 was drawn uniformly at random in [0, 1]^2 x [0, 1]^2. We then computed 100,000 small-amplitude perturbations ū_0 of u_0 by adding a small random number drawn uniformly from [-10^-7, 10^-7] to each component. We then iterated (7)-(9) for 200 steps and plotted the h-component of the final state ū_200 for each of the 100,000 trials in Figure 2(a). The collection of these 100,000 final states essentially fills out the entire attractor, despite the fact that their initial conditions are highly localized (i.e. at distance no more than 10^-7) around a fixed point. In other words, the time t = 200 map of the dynamical system will map a small neighborhood around a fixed initial condition u_0 to the entire attractor. Figure 2(b) additionally illustrates this sensitivity to initial conditions for points on the attractor itself. We take an initial condition u_0 on the attractor and perturb it by 10^-7 to a nearby initial condition ū_0. We then plot the distance ||u_t - ū_t|| between the two corresponding trajectories for the first 200 time steps. After an initial phase of agreement, the trajectories strongly diverge.

The synthetic example (7)-(9) illustrates the potentially chaotic nature of the LSTM architecture. We now show that chaotic behavior occurs for trained models as well, and not just for synthetically generated instances.
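A short script of the kind used to produce such attractor pictures, iterating the induced two-unit LSTM map (7)-(9) with random rounded-integer weights as described above; the particular seed is arbitrary, and plotting is left to the reader.

```python
import numpy as np

rng = np.random.default_rng(0)
Wi, Wf, Wo, Wg = (np.rint(rng.normal(0, 5, (2, 2))) for _ in range(4))
sigmoid = lambda x: 1 / (1 + np.exp(-x))

def phi(h, c):
    i, f, o = sigmoid(Wi @ h), sigmoid(Wf @ h), sigmoid(Wo @ h)
    g = np.tanh(Wg @ h)
    c_new = f * c + i * g
    return o * np.tanh(c_new), c_new

h, c = rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)
points = []
for t in range(10**5):
    h, c = phi(h, c)
    if t >= 10**3:               # discard the transient, keep the attractor
        points.append(h.copy())
points = np.array(points)        # scatter-plot points[:, 0] vs points[:, 1]
```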
We take the parameter values of an LSTM with 228 hidden units trained on the Penn Treebank corpus without dropout (c.f. the experimental section for the precise procedure). We then set all data inputs x_t to zero and run the corresponding induced dynamical system. Two trajectories starting from nearby initial conditions u_0 and ū_0 were computed (as before, ū_0 was obtained by adding to each component of u_0 a small random number drawn uniformly from [-10^-7, 10^-7]). Figure 3(a) plots the first component h(1) of the hidden state for both trajectories over the first 1600 time steps. After an initial phase of agreement, the forward trajectories O^+(u_0) and O^+(ū_0) strongly diverge. We also see that both trajectories exhibit the typical aperiodic behavior that characterizes chaotic systems. If the inputs x_t do not vanish, but come from actual word-level data, then the behavior is very different. The LSTM is now no longer an autonomous system whose dynamics are driven by its hidden states, but a time-dependent system whose dynamics are mostly driven by the external inputs. Figure 3(b) shows the first component h(1) of the hidden states of two trajectories that start with initial conditions u_0 and ū_0 that are far apart. The sensitivity to initial conditions disappears, and instead the trajectories converge toward each other after about 70 steps. The memory of this initial difference is lost. Overall these experiments indicate that a trained LSTM, when it is not driven by external inputs, can be chaotic. In the presence of input data, the LSTM becomes a forced system whose dynamics are dominated by external forcing.

Figure 2: (a): A small neighborhood around a fixed initial condition u_0, after 200 iterations, is mapped to the entire attractor. (b): Two trajectories starting within 10^-7 of one another strongly diverge after 50 steps.

Figure 3: 228-unit LSTM trained on Penn Treebank. (a): In the absence of input data, the system is chaotic and nearby trajectories diverge. (b): In the presence of input data, the system is mostly driven by the external input. Trajectories starting far apart converge.
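A sketch of the zero-input experiment above: freeze a trained step function, feed x_t = 0, and compare nearby trajectories. The `lstm_step` callable is a stand-in for the trained 228-unit model, which is not reproduced here.

```python
import numpy as np

def run(step_fn, u0, x_seq):
    """Unroll a trained recurrent step u_t = step_fn(u_{t-1}, x_t)."""
    traj = [np.asarray(u0, dtype=float)]
    for x in x_seq:
        traj.append(step_fn(traj[-1], x))
    return np.array(traj)

def zero_input_divergence(step_fn, u0, n_steps=1600, eps=1e-7,
                          rng=np.random.default_rng(0)):
    """Trajectories of the induced (x_t = 0) system from two initial
    states within eps of one another; plotting component 0 of each
    reproduces the qualitative picture of Figure 3(a)."""
    u0 = np.asarray(u0, dtype=float)
    zeros = [np.zeros_like(u0)] * n_steps
    v0 = u0 + rng.uniform(-eps, eps, size=u0.shape)
    return run(step_fn, u0, zeros), run(step_fn, v0, zeros)
```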
This follows from a precise, quantified description of the intuitive picture (3)-(4) sketched in the introduction.\nWe begin with the following simple estimate that sheds light on how the hidden states of the CFI activate and then relax toward the origin..\nLemma 1. For any T. k > 0 we have\nH |hT+k(i)| < Ok |hT(i)|+ (Wxt)(i)| max T<t<T+k\ne = 0t(i) max and H = max nt(i) T<t<T+k T<t<T+k\nThis estimate shows that if during a time interval T. T one of\nProof of Lemma1] Using the non-expansivity of the hyperbolic tangent, i.e.tanh(x)] < [x], anc the triangle inequality, we obtain from (1)\nht(i)]<O|ht-1(i)]+H max (Wxt)(i) T<t<T+k\n|hT+k(i)|< Ok|hT(i)]+ H (Wxt)(i) max T<t<T+k\nLemma 2. Starting from any initial state uo, the trajectory O+(uo) will eventually converge to the zero state. That is, limt->+oo ut = O regardless of the the initial state uo..\nProof. From the definition of we clearly have that the sequence defined by ut+1 = (ut) satisfies -1 < ut(i) < 1 for all t and all i. Since the sequence ut is bounded, so is the sequence vt := Ueut + be. That is there exists a finite C > O such that (Ueut)(i) + be(i) < C for all t and i. Using the non-expansivity of the hyperbolic tangent, we then obtain that ut(i)] o(C)|ut-1(i)], for all t and all i. We conclude by noting that 0 < o(C) < 1.\nLemma|2Jremains true for a multi-layer CFN, that is, a CFN in which the first layer is defined by 1 and the subsequent layers 2 l L are defined by:\ntanh( tanh(w(e)\nAssume that Wxt = 0 for all t > T, then an extension of the arguments contained in the proof of the two previous lemmas shows that\nC(1+) +k\nwhere O and H are the maximum values of the ith components of the 0 and n gate in the time interval T, T + k|, that is:\n(i) the embedded inputs Wxt have weak ith feature (i.e. maxT<t<T+k |(Wxt)(i)| is smal (ii) or the input gates nt have their ith component close to zero (i.e. H is small),\noccurs then the ith component of the hidden state ht will relaxes toward zero at a rate that depends on the value of the ith component the the forget gate. Overall this leads to the following simple picture: ht(i) activates when presented with an embedded input Wxt with strong ith feature, and then relaxes toward zero until the data present the network once again with strong ith feature. The strength of the activation and the decay rate are controlled by the ith component of the input and. forget gates. The proof of Lemma1is elementary\nwhenever t is in the interval [T, T + k]. Iterating this inequality and summing the geometric series then gives\nUt = ht, u > (u) := (Ueu + be) O tanh(u)\nunit 41 unit 28 unit 46 unit 61 unit 136 unit 68 unit 138 unit 81 unit 141 unit 129 0.5 unit 144 0.5 unit 131 unit 161 unit 141 unit 192 unit 179 aeeeereonn unit 193 aeeeerreonn unit 200 unit 196 unit 219 0.5 0.5 1 1000 1100 1200 1300 1000 1100 1200 1300 time time (a) First layer (b) Second layer\nInequality (11) shows that higher levels (i.e. larger l) decay more slowly, and remain non-trivial,. while earlier levels (i.e. smaller l) decay more quickly. We illustrate this behavior computationally with a simple experiment. We take a 2-layer, 224-unit CFN network trained on Penn Treebank and feed it the following input data: The first 1000 inputs xt are the first 1000 words of the test set of. Penn Treebank; All subsequent inputs are zero. In other words, xt = 0 if t > 1000. For each of the. two layers we then select the 10 units that decay the slowest after t > 1000 and plot them on Figure. 
Inequality (11) shows that higher levels (i.e. larger ℓ) decay more slowly and remain non-trivial, while earlier levels (i.e. smaller ℓ) decay more quickly. We illustrate this behavior computationally with a simple experiment. We take a 2-layer, 224-unit CFN network trained on Penn Treebank and feed it the following input data: the first 1000 inputs x_t are the first 1000 words of the test set of Penn Treebank; all subsequent inputs are zero. In other words, x_t = 0 if t > 1000. For each of the two layers we then select the 10 units that decay the slowest after t > 1000 and plot them in Figure 4. The figure illustrates that the second layer retains information for much longer than the first layer.

Figure 4: A 2-layer, 224-unit CFN trained on Penn Treebank. All inputs x_t are zero after t = 1000, i.e. the time-point indicated by the dashed line. At left: plot of the 10 "slowest" units of the first layer. At right: plot of the 10 slowest units of the second layer. The second layer retains information much longer than the first layer.

To quantify this observation we define the relaxation time (or half-life) of the ith unit as the smallest T such that

|h_{1000+T}(i)| < 0.5 |h_{1000}(i)|.

Using this definition yields average relaxation times of 2.2 time steps for the first layer and 23.2 time steps for the second layer. The first layer has a standard deviation of approximately 5 steps, while the second layer has a standard deviation of approximately 75 time steps. A more fine-grained analysis reveals that some units in the second layer have relaxation times of several hundred steps. For instance, if instead of averaging the relaxation times over the whole layer we average them over the top quartile (i.e. the 25% of units that decay the most slowly) we get 4.8 time steps and 85.6 time steps for the first and second layers, respectively. In other words, by restricting attention to long-term units the difference between the first and second layers becomes much more striking.

Overall, this experiment conforms with the analysis (11), and indicates that adding a third or fourth layer would potentially allow a multi-layer CFN architecture to retain information for even longer.
"}, {"section_index": "3", "section_name": "3 EXPERIMENTS", "section_text": "
In this section we show that despite its simplicity, the CFN network achieves performance comparable to the much more complex LSTM network on the word-level language modeling task. We use two datasets for these experiments, namely the Penn Treebank corpus (Marcus et al., 1993) and the Text8 corpus (Mikolov et al., 2014). We consider both one-layer and two-layer CFNs and LSTMs for our experiments. We train both CFN and LSTM networks in a similar fashion and always compare models that use the same number of parameters. We compare their performance with and without dropout, and show that in both cases they obtain similar results. We also provide results published in Mikolov et al. (2014), Jozefowicz et al. (2015) and Sukhbaatar et al. (2015) for the sake of comparison.

Table 1: Experiments on Penn Treebank without dropout
Model / Size / Training / Val. perp. / Test perp.
Vanilla RNN / 5M parameters / Jozefowicz et al. (2015) / - / 122.9
GRU / 5M parameters / Jozefowicz et al. (2015) / - / 108.2
LSTM / 5M parameters / Jozefowicz et al. (2015) / - / 109.7
LSTM (1 layer) / 5M parameters / Trained by us / 108.4 / 105.1
CFN (2 layers) / 5M parameters / Trained by us / 109.3 / 106.3

Table 2: Experiments on Text8 without dropout
Model / Size / Training / Perp. on development set
Vanilla RNN / 500 hidden units / Mikolov et al. (2014) / 184
SCRN / 500 hidden units / Mikolov et al. (2014) / 161
LSTM / 500 hidden units / Mikolov et al. (2014) / 156
MemN2N / 500 hidden units / Sukhbaatar et al. (2015) / 147
LSTM (2 layers) / 46.4M parameters / Trained by us / 139.9
CFN (2 layers) / 46.4M parameters / Trained by us / 142.0
For concreteness, the exact implementation for the two-layer architecture of our model is

h_t^{(1)} = θ_t^{(1)} ⊙ tanh(h_{t-1}^{(1)}) + η_t^{(1)} ⊙ tanh(W^{(1)} Drop(x_t, p)),
h_t^{(2)} = θ_t^{(2)} ⊙ tanh(h_{t-1}^{(2)}) + η_t^{(2)} ⊙ tanh(W^{(2)} Drop(h_t^{(1)}, p)),
y_t = LogSoftmax(W^{(3)} Drop(h_t^{(2)}, p)),

where Drop(z, p) denotes the dropout operator with probability p of setting components in z to zero. We compute the gates according to (2), with dropout with probability q applied to the inputs of each gate, and thus the model has two dropout hyperparameters. The parameter p controls the amount of dropout between layers; the parameter q controls the amount of dropout inside each gate. We use a similar dropout strategy for the LSTM, in that all sigmoid gates f, o and i receive the same amount q of dropout.

To train the CFN and LSTM networks, we use a simple online steepest descent algorithm. We update the weights w via

w^{(k+1)} = w^{(k)} - lr · p,  where  p = ∇_w L / ||∇_w L||_2,

where lr is the learning rate and ∇_w L denotes the approximate gradient of the loss with respect to the weights as estimated from a certain number of presented examples. We use the usual backpropagation through time approximation when estimating the gradient: we unroll the net T steps in the past and neglect longer dependencies. In all experiments, the CFN and LSTM networks are unrolled for T = 35 steps and we take minibatches of size 20. As all search directions p have Euclidean norm ||p||_2 = 1, we perform no gradient clipping during training.

We initialize all the weights in the CFN, except for the biases of the gates, uniformly at random in [-0.07, 0.07]. We initialize the biases b_θ and b_η of the gates to 1 and -1, respectively, so that at the beginning of the training θ_t ≈ σ(1) ≈ 0.73 and η_t ≈ σ(-1) ≈ 0.23. We initialize the weights of the LSTM in exactly the same way; the biases for the forget and input gates are initialized to 1 and -1, and all the other weights are initialized uniformly in [-0.07, 0.07]. This initialization scheme favors the flow of information in the horizontal direction at the start of training, since the forget gates start out large and the input gates start out small.
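A minimal sketch of this normalized update rule; `grads` stands for the BPTT-estimated gradients of the loss, and the flattening helper is illustrative.

```python
import numpy as np

def steepest_descent_step(params, grads, lr):
    """w <- w - lr * grad / ||grad||_2; unit-norm search directions
    make gradient clipping unnecessary."""
    flat = np.concatenate([g.ravel() for g in grads])
    norm = np.linalg.norm(flat)
    if norm == 0:
        return params
    return [w - lr * g / norm for w, g in zip(params, grads)]
```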
The learning rate schedules used for each network are described in the appendix..\nWe also report results published in Jozefowicz et al.(2015) were a vanilla RNN, a GRU and ar LSTM network were trained on Penn Treebank, each of them having 5 million parameters (only the test perplexity was reported). Finally we report results published in[Mikolov et al.[(2014) and Sukhbaatar et al. (2015) where various networks are trained on Text8. Of these four networks, only the LSTM network from Mikolov et al.(2014) has the same number of parameters than the CFN and LSTM networks we trained (46.4M parameters). The vanilla RNN, Structurally Constrainec Recurrent Network (SCRN) and End-To-End Memory Network (MemN2N) all have 500 units, bu less than 46.4M parameters. We nonetheless indicate their performance in Table2|to provide some context.\nExperiments with Dropout. Table 3 provides a comparison of various recurrent network archi tectures with dropout evaluated on the Penn Treebank corpus. The first three rows report results published in (Jozefowicz et al.[2015) and the last four rows provide results for LSTM and CFN networks trained and initialized with the strategy previously described. The dropout rate p and q are. chosen as follows: For the experiments with 20M parameters, we set p = 55% and q = 45% for the CFN and p = 60% and q = 40% for the LSTM; For the experiments with 50M parameters, we set p = 65% and q = 55% for the CFN and p = 70% and q = 50% for the LSTM."}, {"section_index": "4", "section_name": "4 CONCLUSION", "section_text": "Despite its simple dynamics, the CFN obtains results that compare well against LSTM network. and GRUs on word-level language modeling. This indicates that it might be possible, in general, tc. build RNNs that perform well while avoiding the intricate, uninterpretable and potentially chaotic. dynamics that can occur in LSTMs and GRUs. Of course, it remains to be seen if dynamicall. simple RNNs such as the proposed CFN can perform well on a wide variety of tasks, potentiall. equiring longer term dependencies than the one needed for word level language modeling. The. experiments presented in Section 2 indicate a plausible path forward - activations in the highe. ayers of a multi-layer CFN decay at a slower rate than the activations in the lower layers. In theory complexity and long-term dependencies can therefore be captured using a more \"feed-forward. approach (i.e. stacking layers) rather than relying on the intricate and hard to interpret dynamics o. an LSTM or a GRU.\nOverall, the CFN is a simple model and it therefore has the potential of being mathematically well. understood. In particular, Section 2 reveals that the dynamics of its hidden states are inherently mor interpretable than those of an LSTM. The mathematical analysis here provides a few key insights. into the network, in both the presence and absence of input data, but obviously more work is needec. before a complete picture can emerge. We hope that this investigation opens up new avenues ol. inquiry, and that such an understanding will drive subsequent improvements.."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Michel Henon. A two-dimensional mapping with a strange attractor. Communications in Mathe matical Physics, 50(1):69-77, 1976\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. 
Neural Computation, 9(8):1735-1780, 1997.
Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.
Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753, 2014.
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. ICML (3), 28:1310-1318, 2013.
Steven H Strogatz. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. Westview Press, 2014.
Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pp. 2440-2448, 2015.
David Sussillo and Omri Barak. Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Computation, 25(3):626-649, 2013.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014.
"}, {"section_index": "6", "section_name": "APPENDIX", "section_text": "
Strange attractor of the Hénon map. For the sake of comparison, we provide in Figure 5 a depiction of a well-known strange attractor (the Hénon attractor) arising from a discrete-time dynamical system. We generate these pictures by reproducing the numerical experiments from Hénon (1976). The discrete dynamical system considered here is the two-dimensional map

x_{t+1} = 1 + y_t - a x_t^2,  y_{t+1} = b x_t,

with parameters set to a = 1.4 and b = 0.3. We obtain Figure 5(a) by choosing the initial state (x_0, y_0) = (0, 0) and plotting the iterates (x_t, y_t) for t between 10^3 and 10^5. All trajectories starting close to the origin at time t = 0 converge toward the depicted attractor. Successive zooms on the branch of the attractor reveal its fractal nature. The structure repeats in a fashion remarkably similar to the 2-unit LSTM in Section 2.

Figure 5: Strange attractor of the Hénon map. From left to right: the Hénon attractor, enlargement of the red box, enlargement of the magenta box.

Strange attractor of a 2-unit GRU. As with LSTMs, the GRU gated architecture can induce a chaotic dynamical system. Figure 6 depicts the strange attractor of the dynamical system

u_t = h_t,  u ↦ Φ̄(u) := (1 - z) ⊙ u + z ⊙ tanh(U (r ⊙ u)),
z := σ(W_z u + b_z),  r := σ(W_r u + b_r),

induced by a two-unit GRU with weight matrices

W_z = [0 1; 1 1],  W_r = [0 1; 1 0],  U = [-5 8; 8 5],

and zero bias for the model parameters. Here also successive zooms on the branch of the attractor reveal its fractal nature. As in the LSTM, the forward trajectories of this dynamical system exhibit a high degree of sensitivity to initial states.

Figure 6: Strange attractor of a two-unit GRU. Successive zooms reveal the fractal nature of the attractor.
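A short script of the kind used for Figure 5, iterating the Hénon map and discarding the transient; plotting is left to the reader.

```python
def henon_orbit(a=1.4, b=0.3, burn_in=10**3, n=10**5):
    """Iterate the Henon map from (0, 0) and keep the post-transient points."""
    x, y, pts = 0.0, 0.0, []
    for t in range(n):
        x, y = 1.0 + y - a * x * x, b * x
        if t >= burn_in:
            pts.append((x, y))
    return pts  # scatter-plot these points to see the attractor

points = henon_orbit()
```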
Network sizes and learning rate schedules used in the experiments. In the Penn Treebank experiment without dropout (Table 1), the CFN network has two hidden layers of 224 units each, for a total of 5 million parameters. The LSTM has one hidden layer with 228 units, for a total of 5 million parameters as well. We also tried a two-layer LSTM with 5 million parameters, but the result was worse (test perplexity of 110.6) and we did not report it in the table. For the Text8 experiments (Table 2), the LSTM has two hidden layers with 481 hidden units, for a total of 46.4 million parameters. We also tried a one-layer LSTM with 46.4 million parameters, but the result was worse (perplexity of 140.8). The CFN has two hidden layers with 495 units each, for a total of 46.4 million parameters as well.

For both experiments without dropout (Tables 1 and 2), we used a simple and aggressive learning rate schedule: at each epoch, lr is divided by 3. For the CFN the initial learning rate was chosen to be lr_0 = 5.5 for PTB and lr_0 = 5 for Text8. For the LSTM we chose lr_0 = 7 for PTB and lr_0 = 5 for Text8.

In the Penn Treebank experiment with dropout (Table 3), the CFN with 20M parameters has two hidden layers of 731 units each, and the LSTM with 20M parameters trained by us has two hidden layers of 655 units each. We also tried a one-layer LSTM with 20M parameters, and it led to similar but slightly worse results than the two-layer architecture. For both networks, the learning rate was divided by 1.1 each time the validation perplexity did not decrease by at least 1%. The initial learning rates were chosen to be lr_0 = 7 for the CFN and lr_0 = 5 for the LSTM."}]
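A compact sketch of the two schedules just described; the epoch loop and the source of the perplexity values are placeholders.

```python
def no_dropout_schedule(lr):
    """Aggressive schedule for Tables 1 and 2: divide lr by 3 every epoch."""
    return lr / 3.0

def dropout_schedule(lr, val_perp, prev_val_perp):
    """Schedule for Table 3: divide lr by 1.1 whenever validation
    perplexity fails to drop by at least 1%."""
    if prev_val_perp is not None and val_perp > 0.99 * prev_val_perp:
        return lr / 1.1
    return lr
```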
HJSCGD9ex | [{"section_index": "0", "section_name": "BEYOND BILINGUAL: MULTI-SENSE WORD EMBED DINGS USING MULTILINGUAL CONTEXT", "section_text": "Word embeddings, which represent a word as a point in a vector space, have become ubiquitous to several NLP tasks. A recent line of work uses bilingual (two languages) corpora to learn a different vector for each sense of a word, by exploiting crosslingual signals to aid sense identification. We present a multi view Bayesian non-parametric algorithm which improves multi-sense word em- beddings by (a) using multilingual (i.e., more than two languages) corpora to sig nificantly improve sense embeddings beyond what one achieves with bilingual in- formation, and (b) uses a principled approach to learn a variable number of senses per word, in a data-driven manner. Ours is the first approach with the ability to leverage multilingual corpora efficiently for multi-sense representation learning Experiments show that multilingual training significantly improves performance over monolingual and bilingual training, by allowing us to combine different par- allel corpora to leverage multilingual context. Multilingual training yields com- parable performance to a state of the art monolingual model trained on five times more training data."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Word embeddings (Turian, Ratinov, and Bengio] 2010] Mikolov, Yih, and Zweig] 2013] inter alia. represent a word as a point in a vector space. This space is able to capture semantic relationships. vectors of words with similar meanings have high cosine similarity. Use of embeddings as features has been shown to benefit several NLP tasks and serve as good initializations for deep architecture. ranging from dependency parsing (Bansal, Gimpel, and Livescu2014) to named entity recognitior Guo et al.2014b).\nAlthough these representations are now ubiquitous in NLP, most algorithms for learning word embeddings do not allow a word to have different meanings in different contexts, a phenomenor known as polysemy. For example, the word bank assumes different meanings in financial (eg. \"bank pays interest') and geographical contexts (eg. \"river bank') and which cannot be represented ad equately with a single embedding vector. Unfortunately, there are no large sense-tagged corpora available and such polysemy must be inferred from the data during the embedding process.\nI got high interest on my Je suis un grand savings from the bank. [interet] sur mes iR[T[F[E] economies de la banque. Mon [interet] My interest lies in JAJ[X]ET History. reside dans Fj o I'Histoire.\nFigure 1: Benefit of Multilingual Information (beyond bilingual): Two different senses of the word \"interest' and their translations to French and Chinese (word translation shown in [bold]). While the surface form of both senses are same in French, they are different in Chinese..\nSeveral attempts (Reisinger and Mooney. 2010f Neelakantan et al. 2014} Li and Jurafsky2015 have been made to infer multi-sense word representations by modeling the sense as a latent variable. in a Bayesian non-parametric framework. These approaches rely on the 'one-sense per collocation\" heuristic (Yarowsky1995), which assumes that presence of nearby words correlate with the sense. of the word of interest. 
This heuristic provides only a weak signal for sense identification, and such algorithms require large amount of training data to achieve competitive performance.."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "However, bilingual distributional signals often do not suffice. It is common that polysemy for a word. survives translation. Fig.1 shows an illustrative example - both senses of interest get translated. to interet in French. However, this becomes much less likely as the number of languages under. consideration grows. By looking at Chinese translation in Fig.1 we can observe that the senses. translate to different surface forms. Note that the opposite can also happen (i.e. same surface. forms in Chinese, but different in French). Existing crosslingual approaches are inherently bilingual. and cannot naturally extend to include additional languages due to several limitations (details in Sectior4). Furthermore, works like (Suster, Titov, and van Noord2016) sets a fixed number of. senses for each word, leading to inefficient use of parameters, and unnecessary model complexity1.\nThis paper addresses these limitations by proposing a multi-view Bayesian non-parametric wor epresentation learning algorithm which leverages multilingual distributional information. Our rep resentation learning framework is the first multilingual (not bilingual) approach, allowing us to uti ize arbitrarily many languages to disambiguate words in English. To move to multilingual system, i is necessary to ensure that the embeddings of each foreign language are relatable to each other (i.e they live in the same space). We solve this by proposing an algorithm in which word representation are learned jointly across languages, using English as a bridge. While large parallel corpora betwee two languages are scarce, using our approach we can concatenate multiple parallel corpora to obtail a large multilingual corpus. The parameters are estimated in a Bayesian nonparametric framewor. that allows our algorithm to only associate a word with a new sense vector when evidence (fron either same or foreign language context) requires it. As a result, the model infers different numbe of senses for each word in a data-driven manner, avoiding wasting parameters.\nTogether, these two ideas - multilingual distributional information and nonparametric sense mod eling - allow us to disambiguate multiple senses using far less data than is necessary for previous methods. We experimentally demonstrate that our algorithm can achieve competitive performance after training on a small multilingual corpus, comparable to a model trained monolingually on a much larger corpus. We present an analysis discussing the effect of various parameters - choice of language family for deriving the multilingual signal, crosslingual window size etc. and also show qualitative improvement in the embedding space."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Work on inducing multi-sense embeddings can be divided in two broad categories - two-staged. approaches and joint learning approaches. Two-staged approaches (Reisinger and Mooney2010 Huang et al.]2012) induce multi-sense embeddings by first clustering the contexts and then using. the clustering to obtain the sense vectors. The contexts can be topics induced using latent topic models(Liu, Qiu, and Huang2015} Liu et al.] 2015), or Wikipedia (Wu and Giles2015) or coarse part-of-speech tags (Qiu et al.l2014). 
A more recent line of work in the two-staged category is that. of retrofitting (Faruqui et al.2015 Jauhar, Dyer, and Hovy2015), which aims to infuse semantic. ontologies from resources like WordNet (Miller1995) and Framenet (Baker, Fillmore, and Lowe.. 1998) into embeddings during a post-processing step. Such resources list (albeit not exhaustively). the senses of a word, and by retro-fitting it is possible to tease apart the different senses of a word. While some resources like WordNet (Miller1995) are available for many languages, they are not exhaustive in listing all possible senses. Indeed, the number senses of a word is highly dependent on the task and cannot be pre-determined using a lexicon (Kilgarriff1997). Ideally, the senses should be inferred in a data-driven manner, so that new senses not listed in such lexicons can be discovered. While recent work has attempted to remedy this by using parallel text for retrofitting. sense-specific embeddings (Ettinger, Resnik, and Carpuat!2016), their procedure requires creation of sense graphs, which introduces additional tuning parameters. On the other hand, our approach. only requires two tuning parameters (prior a and maximum number of senses T)..\nMost words in conventional English are monosemous, i.e. single sense (eg. the word monosemous\nRecently, several approaches (Guo et al.]2014a] Suster, Titov, and van Noord2016) propose to learn multi-sense embeddings by exploiting the fact that different senses of the same word may be translated into different words in a foreign language (Dagan and Itai]1994, Resnik and Yarowsky 1999, Diab and Resnik2002, Ng, Wang, and Chan]2003). For example, bank in English may be translated to banc or banque in French, depending on whether the sense is financial or geographical Such bilingual distributional information allows the model to identify which sense of a word is being used during training.\nIn contrast, joint learning approaches (Neelakantan et al.. 2014]Li and Jurafsky2015) jointly learn the sense clusters and embeddings by using non-parametrics. Our approach belongs to this category. The closest non-parametric approach to ours is that of (Bartunov et al.||2016), who proposed a multi-. sense variant of the skip-gram model which learns the different number of sense vectors for all words. from a large monolingual corpus (eg. English Wikipedia). Our work can be viewed as the multi-view. extension of their model which leverages both monolingual and crosslingual distributional signals for learning the embeddings. In our experiments, we compare our model to monolingually trained. version of their model.\nIncorporating crosslingual distributional information is a popular technique for learning word em-. beddings, and improves performance on several downstream tasks (Faruqui and Dyer. 2014 Guo et al.2016f Upadhyay et al.2016). However, there has been little work on learning multi-sense embeddings using crosslingual signals (Bansal, DeNero, and Lin2012)Guo et al.| 2014a] Suster, Titov, and van Noord 2016) with only (Suster, Titov, and van Noord[2016) being a joint approach. (Kawakami and Dyer 2015) also used bilingual distributional signals in a deep neural architecture to. learn context dependent representations for words, though they do not learn separate sense vectors.."}, {"section_index": "4", "section_name": "3 MODEL DESCRIPTION", "section_text": "Let E = {xi,.., x,., x.} denote the words of the English side and F =. denote the words of the foreign side of the parallel corpus. 
We assume that we have access t word alignments Ae->f and Af->e mapping words in English sentence to their translation in foreigr. sentence (and vice-versa), so that xe and xf are aligned if Ae->f (xe) = xf. We define Nbr(x, L, d. as the neighborhood in language L of size d (on either side) around word x in its sentence. Th English and foreign neighboring words are denoted by ye and yf , respectively. Note that ye and y. need not be translations of each other. Each word xf in the foreign vocabulary is associated with. a dense vector xf in Rm, and each word xe in English vocabulary admits at most T sense vectors. with the kth sense vector denoted as xAs our main goal is to model multiple senses for words in English, we do not model polysemy in the foreign language and use a single vector to represen. each word in the foreign vocabulary.\nWe model the joint conditional distribution of the context words ye, yf given an English word x and its corresponding translation xf on the parallel corpus:.\nwhere 0 are model parameters (i.e. all embeddings) and a per-prior on latent senses\nP(ye,yf z, B xe.xf,a;0)d\n(1- Bxr), Bxk|'nd' Beta(Bxk|1,), k =1,. P(zx =k|Bx) =xk\nAfter conditioning upon word sense, we decompose the context probability P(ye, yf z, xe, xf ; 0 into two terms, P(ye | xe,xf,z;0)P(yf | xe,xf, z;0). Both the first and the second terms are. sense-dependent, and each factors as,.\nP(y|xe,xf,z=k;0)x(xe,z= Y(x.U expy (x+xt) exn?\n2We also maintain a context vector for each word in the English and Foreign vocabularies. The contex1 vector is used as the representation of the word when it appears as the context for another word.\nP(ye.y] xe,xf: a.\nwhere are the parameters determining the model probability on each sense for xe (i.e., the weight on each possible value for z). We place a Dirichlet process (Ferguson1973) prior on sense assign- ment for each word. Thus, adding the word-x subscript to emphasize that these are word-specific senses,\nThat is, the potentially infinite number of senses for each word x have probability determined by the sequence of independent stick-breaking weights, xk, in the constructive definition of the DP (Sethuraman!|1994). The hyper-prior concentration a provides information on the number of senses we expect to observe in our corpus.\nFigure 2: The aligned pair (interest,interet) is used to predict monolingual and crosslingual context in botl languages (see factors in eqn. (3)). We pick each sense (here 2nd) vector for interest, to perform weighted. update. We only model polysemy in English.\nwhere xf is the embedding corresponding to the kth sense of the word xe, and y is either ye or yf. The factor (xe, z = k, y) use the corresponding sense vector in a skip-gram-like formulation. This results in total of 4 factors,\nLearning. Learning involves maximizing the log-likelihood\nLet q(z,) = q(z)q() where q(z) = I, q(zi) and q() = Iw=1IIk=1 wk be the fully factor ized variational approximation of the true posterior P(z, ye, yf , xe, xf , a), where V is the size. of english vocabulary, and T is the maximum number of senses for any word. The optimizatior problem solves for 0,q(z) and q() using the stochastic variational inference technique (Hoffman e al.2013) similar to (Bartunov et al.]2016) (refer for details).\n00+ptVe Zik logp(y|xi,k, 0 k|Zik>E yEyc\nDisambiguation. 
Disambiguation. Similar to (Bartunov et al. 2016), we can disambiguate the sense for the word x^e given a monolingual context y^e as follows,

P(z \mid x^e, y^e) \propto P(y^e \mid x^e, z; \theta) \int P(z \mid x^e, \beta)\, q(\beta)\, d\beta \qquad (5)

Although the model trains embeddings using both monolingual and crosslingual context, we only use monolingual context at test time. We found that so long as the model has been trained with multilingual context, it performs well in sense disambiguation on new data even if it contains only monolingual context. A similar observation was made by (Suster, Titov, and van Noord 2016).

Algorithm 1 Pseudocode of the learning algorithm

Input: parallel corpus E = {x^e_1, ..., x^e_{N_e}} and F = {x^f_1, ..., x^f_{N_f}}, alignments A_{e→f} and A_{f→e}, hyper-parameters α and T, window sizes d, d'.
Output: θ, q(β), q(z)
1:  for i = 1 to N_e do                      ▷ update English vectors
2:    w ← x^e_i
3:    for k = 1 to T do
4:      z_ik ← E_{q(β_w)}[log p(z_i = k | β_w, x^e_i)]
5:    y_c ← Nbr(x^e_i, E, d) ∪ Nbr(x^f_i, F, d') ∪ {x^f_i} where x^f_i = A_{e→f}(x^e_i)
6:    for y in y_c do
7:      SENSE-UPDATE(x^e_i, y, z_i)
8:    Renormalize z_i using softmax
9:    Update suff. stats. for q(β) like (Bartunov et al. 2016)
10:   Update θ using eq. (4)
11: for i = 1 to N_f do                      ▷ jointly update foreign vectors
12:   y_c ← Nbr(x^f_i, F, d) ∪ Nbr(x^e_i, E, d') ∪ {x^e_i} where x^e_i = A_{f→e}(x^f_i)
13:   for y in y_c do
14:     SKIP-GRAM-UPDATE(x^f_i, y)
15: procedure SENSE-UPDATE(x^e_i, y, z_i)
16:   z_ik ← z_ik + log p(y | x^e_i, k, θ)
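A minimal NumPy rendering of SENSE-UPDATE and the thresholded step of eq. (4) may help fix ideas. This is our own sketch with illustrative names; in particular, p(y | x, k) is simplified to a sigmoid of a dot product (a negative-sampling-style stand-in), not the exact softmax of eq. (3):

```python
import numpy as np

def sense_update(sense_vecs, ctx_vec, log_z):
    """SENSE-UPDATE (Algorithm 1, line 16): add log p(y | x, k) per sense.

    log p(y | x, k) is approximated by log sigmoid(y . x_k).
    """
    scores = sense_vecs @ ctx_vec
    return log_z + np.log(1.0 / (1.0 + np.exp(-scores)))

def weighted_sense_step(sense_vecs, ctx_vec, log_z, lr=0.025, eps=1e-3):
    """Eq. (4): every sense with responsibility z_k > eps takes a gradient
    step on log p(y | x, k), weighted by z_k."""
    z = np.exp(log_z - log_z.max())
    z /= z.sum()                                   # renormalize (line 8)
    probs = 1.0 / (1.0 + np.exp(-(sense_vecs @ ctx_vec)))
    for k in np.nonzero(z > eps)[0]:
        # d/dx_k log sigmoid(y . x_k) = (1 - sigmoid(y . x_k)) * y
        sense_vecs[k] += lr * z[k] * (1.0 - probs[k]) * ctx_vec
    return sense_vecs, z

rng = np.random.default_rng(0)
X, y = rng.normal(size=(5, 8)), rng.normal(size=8)
X, z = weighted_sense_step(X, y, sense_update(X, y, np.zeros(5)))
```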
"}, {"section_index": "5", "section_name": "MULTILINGUAL EXTENSION", "section_text": "Bilingual distributional signal alone may not be sufficient, as polysemy may survive translation in the second language. Unlike existing approaches, we can easily incorporate multilingual distributional signals in our model. For using languages l1 and l2 to learn multi-sense embeddings for English, we train on a concatenation of an En-l1 parallel corpus with an En-l2 parallel corpus. This technique can easily be generalized to more than two foreign languages to obtain a large multilingual corpus.

Value of ψ(y^e, x^f). The factor modeling the dependence of the English context word y^e on the foreign word x^f is crucial to performance when using multiple languages. Consider the case of using French and Spanish contexts to disambiguate the financial sense of the English word bank. In this case, the (financial) sense vector of bank will be used to predict the vector of banco (Spanish context) and banque (French context). If the vectors for banco and banque do not reside in the same space or are not close, the model will incorrectly assume they are different contexts and introduce a new sense for bank. This is precisely why bilingual models, like that of (Suster, Titov, and van Noord 2016), cannot be extended to the multilingual setting, as they pre-train the embeddings of the second language before running the multi-sense embedding process. As a result of naive pre-training, the French and Spanish vectors of semantically similar pairs like (banco, banque) will lie in different spaces and need not be close. A similar reason holds for (Guo et al. 2014a), as they use a two-step approach instead of joint learning.

To avoid this, the vectors for pairs like banco and banque should lie in the same space, close to each other and to the sense vector for bank. The ψ(y^e, x^f) term attempts to ensure this by using the vectors for banco and banque to predict the vector of bank. This way, the model brings the embedding spaces for Spanish and French closer by using English as a bridge language during joint training. A similar idea of using English as a bridging language was used in the models proposed in (Hermann and Blunsom 2014) and (Coulmance et al. 2015). Besides the benefit in the multilingual case, the ψ(y^e, x^f) term improves performance in the bilingual case as well, as it forces the English and second-language embeddings to remain close in space.

To show the value of the ψ(y^e, x^f) factor in our experiments, we ran a variant of Algorithm 1 without the ψ(y^e, x^f) factor, by only using the monolingual neighborhood Nbr(x^f, F) in line 12 of Algorithm 1. We call this variant the ONE-SIDED model, and the model in Algorithm 1 the FULL model.

Parallel Corpora. We use parallel corpora in English (En), French (Fr), Spanish (Es), Russian (Ru) and Chinese (Zh) in our experiments. Corpus statistics for all datasets used in our experiments are shown in Table 1. For En-Zh, we use the FBIS parallel corpus (LDC2003E14). For En-Fr, we use the first 10M lines from the Giga-EnFr corpus released as part of the WMT shared task (Callison-Burch et al. 2011). Note that the domain from which a parallel corpus has been derived can affect the final result. To understand what choice of languages provides a suitable disambiguation signal, it is necessary to control for domain in all parallel corpora. To this end, we also used the En-Fr, En-Es, En-Zh and En-Ru sections of the MultiUN parallel corpus (Eisele and Chen 2010). Word alignments were generated using the fast_align tool (Dyer, Chahuneau, and Smith 2013) in the symmetric intersection mode. Tokenization and other preprocessing were performed using the cdec toolkit. The Stanford Segmenter (Tseng et al. 2005) was used to preprocess the Chinese corpora.

Table 1: Corpus Statistics (in millions). Horizontal lines demarcate corpora from the same domain.

Corpus   Source                Lines (M)   EN-Words (M)
En-Fr    Canadian & EU proc.   ~10         250
En-Zh    FBIS news             ~9.5        286
En-Es    UN proc.              ~10         270
En-Fr    UN proc.              ~10         260
En-Zh    UN proc.              ~8          230
En-Ru    UN proc.              ~10         270

Word Sense Induction (WSI). We evaluate our approach on the word sense induction task. In this task, we are given several sentences showing usages of the same word, and are required to cluster all sentences which use the same sense (Nasiruddin 2013). The predicted clustering is then compared against a provided gold clustering. Note that WSI is a harder task than Word Sense Disambiguation (WSD) (Navigli 2009), as unlike WSD, this task does not involve any supervision or explicit human knowledge about the senses of words. We use the disambiguation approach in eq. (5) to predict the sense given the word and four context words.

To allow for fair comparison with earlier work, we use the same benchmark datasets as (Bartunov et al. 2016): Semeval-2007, Semeval-2010 and Wikipedia Word Sense Induction (WWSI). We report Adjusted Rand Index (ARI) (Hubert and Arabie 1985) in the experiments, as ARI is a more strict and precise metric than F-score and V-measure.

Parameter Tuning. For fairness, we used five context words on either side to update each English word vector in all the experiments. In the monolingual setting, all five words are English; in the multilingual settings, we used four neighboring English words plus the one foreign word aligned to the word being updated (d = 4, d' = 0 in Algorithm 1). We also analyze the effect of varying d'.

We tune the parameters α and T by maximizing the log-likelihood of a held-out English text. The parameters were chosen from the following values: α ∈ {0.05, 0.1, ..., 0.25}, T ∈ {5, 10, ..., 30}. All models were trained for 10 iterations with a decaying learning rate of 0.025, decayed to 0. Unless otherwise stated, all embeddings are 100-dimensional.

Under various choices of α and T, we identify only about 10-20% of vocabulary words as polysemous using monolingual training, and 20-25% as polysemous using multilingual training. It is evident that using the non-parametric prior has led to substantially more efficient representations compared to previous methods with a fixed number of senses per word.

We performed extensive experiments to evaluate the benefit of leveraging bilingual and multilingual information during training. We also analyze how different choices of language family (i.e., using more distant vs. more similar languages) affect the performance of the embeddings.

Word Sense Induction Results. The results for WSI are shown in Table 2. MONO refers to the AdaGram model of (Bartunov et al. 2016) trained on the English side of the parallel corpus. In all cases, the MONO model is outperformed by the ONE-SIDED and FULL models, showing the benefit of using crosslingual signal in training. Best performance is attained by the multilingual model (En-FrZh), showing the value of the multilingual signal. The value of the ψ(y^e, x^f) term is also verified by the fact that the ONE-SIDED model performs worse than the FULL model.

Table 2: Results on word sense induction (left four columns) in ARI and contextual word similarity (last column) in percent correlation. Language pairs are separated by horizontal lines. Best results shown in bold.

                       S-2007   S-2010   WWSI   avg. ARI   SCWS
En-Fr      MONO        .044     .064     .112   .073       41.1
           ONE-SIDED   .054     .074     .116   .081       41.9
           FULL        .055     .086     .105   .082       41.8
En-Zh      MONO        .054     .074     .073   .067       42.6
           ONE-SIDED   .059     .084     .078   .074       45.0
           FULL        .055     .090     .079   .075       41.7
En-FrZh    MONO        .056     .086     .103   .082       47.3
           ONE-SIDED   .067     .085     .113   .088       44.6
           FULL        .065     .094     .120   .093       41.9

Table 3: Effect (in ARI) of language family distance on the WSI task. Best results for each column shown in bold. The improvement from MONO to FULL is also shown as (3) - (1). Note that this is not comparable to results in Table 2, as we use a different training corpus to control for the domain.

                  S-2007              S-2010              WWSI                Avg. ARI
Train Setting     En-FrEs   En-RuZh   En-FrEs   En-RuZh   En-FrEs   En-RuZh   En-FrEs   En-RuZh
(1) MONO          .035      .033      .046      .049      .054      .049      .045      .044
(2) ONE-SIDED     .044      .044      .055      .063      .062      .057      .054      .055
(3) FULL          .046      .040      .056      .070      .068      .069      .057      .059
(3) - (1)         .011      .007      .010      .021      .014      .020      .012      .015
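For reference, the ARI numbers reported in Tables 2 and 3 can be computed from predicted and gold sense clusterings with scikit-learn. A short sketch (our own, assuming integer cluster labels per word occurrence):

```python
from sklearn.metrics import adjusted_rand_score

gold = [0, 0, 1, 1, 1, 2]   # gold sense ids for six usages of one word
pred = [1, 1, 0, 0, 2, 2]   # sense ids induced by the model
# ARI is invariant to label permutations; 1.0 means a perfect clustering
print(adjusted_rand_score(gold, pred))
```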
We can also compare (unfairly to the FULL model) to the best results described in (Bartunov et al. 2016), which achieved ARI scores of 0.069, 0.097 and 0.286 on the three datasets respectively, after training 300-dimensional embeddings on English Wikipedia (~100M lines). Note that, as WWSI was derived from Wikipedia, training on Wikipedia gives the AdaGram model an undue advantage, resulting in a high ARI score on WWSI. Nevertheless, even in this unfair comparison, it is noteworthy that on S-2007 and S-2010 we can achieve comparable performance (0.067 and 0.094) with multilingual training to a model trained on almost 5 times more data and with higher (300) dimensional embeddings.

Contextual Word Similarity Results. For completeness, we report correlation scores on the Stanford contextual word similarity dataset (SCWS) (Huang et al. 2012) in Table 2. The task requires computing the similarity between two words given their contexts. While the bilingually trained model outperforms the monolingually trained model, surprisingly the multilingually trained model does not perform well on SCWS. We believe this may be due to our parameter tuning strategy.5

Effect of Language Family Distance. Intuitively, the choice of language can affect the result of crosslingual training, as some languages may provide better disambiguation signals than others. We performed a systematic set of experiments to evaluate whether we should choose languages from a closer family (Indo-European languages) or a farther family (Non-Indo-European languages) as training data alongside English.6 To control for domain here we use the MultiUN corpus. We use En paired with Fr and Es as Indo-European languages, and English paired with Ru and Zh as representing Non-Indo-European languages.

From Table 3, we see that using Non-Indo-European languages yields a slightly higher average improvement on the WSI task than using Indo-European languages. This suggests that using languages from a distant family aids better disambiguation. Our findings echo those of (Resnik and Yarowsky 1999), who found that the tendency to lexicalize different senses of an English word differently in a second language correlated with language distance.

Effect of Window Size. Figure 3d shows the effect of increasing the crosslingual window (d') on the average ARI on the WSI task for the En-Fr and En-Zh models. While increasing the window size improves the average score for the En-Zh model, the score for the En-Fr model goes down. This suggests that it might be beneficial to have a separate window parameter per language. This also aligns with the earlier observation that different language families have different suitability (a bigger crosslingual context from a distant family helped) and requirements for optimal performance.

5 Most works tune directly on the test dataset for word similarity tasks (Faruqui et al. 2016).
6 (Suster, Titov, and van Noord 2016) compared different languages but did not control for domain.

Qualitative Illustration. As an illustration of the effects of multilingual training, Figure 3 shows PCA plots of 11 sense vectors for 9 words using the monolingual, bilingual and multilingual models. From Fig. 3a we note that with monolingual training the senses are poorly separated.
Although the model infers two senses for bank, both senses of bank are close to financial terms, suggesting their distinction was not recognized. The same can be observed about apple. In Fig. 3b, with bilingual training, the model infers the two senses of bank correctly, and the two senses of apple become more distant. The model can still improve, e.g., by pulling interest towards the financial sense of bank, and pulling itunes towards apple_2. Finally, in Fig. 3c, all senses of the words are more clearly clustered, improving over the clustering of Fig. 3b. The senses of apple, interest, and bank are well separated and are close to sense-specific words, showing the benefit of multilingual training.

"}, {"section_index": "6", "section_name": "7 CONCLUSION", "section_text": "We presented a multi-view, non-parametric word representation learning algorithm which can leverage multilingual distributional information. Our approach effectively combines the benefits of crosslingual training and Bayesian non-parametrics. Ours is the first multi-sense representation learning algorithm capable of using multilingual distributional information efficiently, by combining several parallel corpora to obtain a large multilingual corpus. Our experiments show how this multi-view approach learns high-quality embeddings using substantially less data and parameters than the prior state of the art. While we focused on improving the embeddings of English words here, the same algorithm could learn better multi-sense embeddings for Chinese, for instance. Exciting avenues for future research include extending our approach to model polysemy in foreign languages. The sense vectors could then be aligned across languages (thanks to our joint training paradigm) to generate a multilingual WordNet-like resource, in a completely unsupervised manner.

[Figure 3: four panels: (a) Monolingual (En side of En-Zh), (b) Bilingual (En-Zh), (c) Multilingual (En-FrZh) PCA plots of the sense vectors; (d) window size vs. avg. ARI for En-Fr and En-Zh.]

Figure 3: Qualitative: PCA plots of the vectors for {apple, bank, interest, itunes, potato, west, monetary, desire}, with multiple sense vectors for apple, interest and bank, obtained using monolingual (3a), bilingual (3b) and multilingual (3c) training. Window Tuning: Figure 3d shows tuning the window size for En-Zh and En-Fr.

"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Baker, C. F.; Fillmore, C. J.; and Lowe, J. B. 1998. The Berkeley FrameNet project. In ACL.

Bansal, M.; DeNero, J.; and Lin, D. 2012. Unsupervised translation sense clustering. In NAACL.

Bansal, M.; Gimpel, K.; and Livescu, K. 2014. Tailoring continuous word representations for dependency parsing. In ACL.

Bartunov, S.; Kondrashkin, D.; Osokin, A.; and Vetrov, D. 2016. Breaking sticks and ambiguities with adaptive skip-gram. AISTATS.

Dagan, I., and Itai, A. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics.

Eisele, A., and Chen, Y. 2010. MultiUN: A multilingual corpus from United Nations documents. In LREC.

Ettinger, A.; Resnik, P.; and Carpuat, M. 2016. Retrofitting sense-specific word vectors using parallel text. In NAACL.
Faruqui, M.; Dodge, J.; Jauhar, S. K.; Dyer, C.; Hovy, E.; and Smith, N. A. 2015. Retrofitting word vectors to semantic lexicons. In NAACL.

Faruqui, M.; Tsvetkov, Y.; Rastogi, P.; and Dyer, C. 2016. Problems with evaluation of word embeddings using word similarity tasks. In 1st RepEval Workshop.

Ferguson, T. S. 1973. A Bayesian analysis of some nonparametric problems. The Annals of Statistics.

Guo, J.; Che, W.; Wang, H.; and Liu, T. 2014b. Revisiting embedding features for simple semi-supervised learning. In EMNLP.

Hermann, K. M., and Blunsom, P. 2014. Multilingual distributed representations without word alignment. In ICLR.

Hoffman, M. D.; Blei, D. M.; Wang, C.; and Paisley, J. W. 2013. Stochastic variational inference. JMLR.

Huang, E. H.; Socher, R.; Manning, C. D.; and Ng, A. Y. 2012. Improving word representations via global context and multiple word prototypes. In ACL.

Hubert, L., and Arabie, P. 1985. Comparing partitions. Journal of Classification.

Jauhar, S. K.; Dyer, C.; and Hovy, E. 2015. Ontologically grounded multi-sense representation learning for semantic vector space models. In NAACL.

Kilgarriff, A. 1997. I don't believe in word senses. Computers and the Humanities.

Kawakami, K., and Dyer, C. 2015. Learning to represent words in context with multilingual supervision. ICLR Workshop.

Koehn, P. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5, 79-86.

Li, J., and Jurafsky, D. 2015. Do multi-sense embeddings improve natural language understanding? EMNLP.

Liu, Y.; Liu, Z.; Chua, T.-S.; and Sun, M. 2015. Topical word embeddings. In AAAI.

Luong, T.; Pham, H.; and Manning, C. D. 2015. Bilingual word representations with monolingual quality in mind. In Workshop on Vector Space Modeling for NLP.

Miller, G. A. 1995. WordNet: a lexical database for English. Communications of the ACM.

Navigli, R. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR).

Neelakantan, A.; Shankar, J.; Passos, A.; and McCallum, A. 2014. Efficient non-parametric estimation of multiple embeddings per word in vector space. In EMNLP.

Ng, H. T.; Wang, B.; and Chan, Y. S. 2003. Exploiting parallel texts for word sense disambiguation: An empirical study. In ACL.

Qiu, L.; Cao, Y.; Nie, Z.; Yu, Y.; and Rui, Y. 2014. Learning word representation considering proximity and ambiguity. In AAAI.

Reisinger, J., and Mooney, R. J. 2010. Multi-prototype vector-space models of word meaning. In NAACL.

Resnik, P., and Yarowsky, D. 1999. Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation. NLE.

Sethuraman, J. 1994. A constructive definition of Dirichlet priors. Statistica Sinica.

Tseng, H.; Chang, P.; Andrew, G.; Jurafsky, D.; and Manning, C. 2005. A conditional random field word segmenter for SIGHAN bakeoff 2005. In Proc. of SIGHAN.

Turian, J.; Ratinov, L.; and Bengio, Y. 2010. Word representations: a simple and general method for semi-supervised learning. In ACL.

Upadhyay, S.; Faruqui, M.; Dyer, C.; and Roth, D. 2016. Cross-lingual models of word embeddings: An empirical comparison. In ACL.

Suster, S.; Titov, I.; and van Noord, G. 2016. Bilingual learning of multi-sense embeddings with discrete autoencoders. In NAACL.

Yarowsky, D. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In ACL.

Liu, P.; Qiu, X.; and Huang, X. 2015. Learning context-sensitive word embeddings with neural tensor skip-gram model. In IJCAI.
Mikolov, T.; Yih, W.-t.; and Zweig, G. 2013. Linguistic regularities in continuous space word representations. In NAACL.

Nasiruddin, M. 2013. A state of the art of word sense induction: A way towards word sense disambiguation for under-resourced languages. arXiv preprint arXiv:1310.1425."}]
Bk0MRI5lg | [{"section_index": "0", "section_name": "BRIDGING NONLINEARITIES AND STOCHASTIC REGULARIZERS WITH GAUSSIAN ERROR LINEAR UNITS", "section_text": "Dan Hendrycks*

Kevin Gimpel

We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic regularizer which randomly applies the identity or zero map to a neuron's input. This stochastic regularizer is comparable to nonlinearities aided by dropout, but it removes the need for a traditional nonlinearity. The connection between the GELU and the stochastic regularizer suggests a new probabilistic understanding of nonlinearities. We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all tasks."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Early artificial neurons utilized binary threshold units (Hopfield 1982; McCulloch & Pitts 1943). These hard binary decisions are smoothed with sigmoid activations, enabling a neuron to have a \"firing rate\" interpretation and to train with backpropagation. But as networks became deeper, training with sigmoid activations proved less effective than the non-smooth, less-probabilistic ReLU (Nair & Hinton 2010), which makes hard gating decisions based upon an input's sign. Despite having less of a statistical motivation, the ReLU remains a competitive engineering solution which often enables faster and better convergence than sigmoids. Building on the successes of ReLUs, a recent modification called ELUs (Clevert et al. 2016) allows a ReLU-like nonlinearity to output negative values, which sometimes increases training speed. In all, the activation choice has remained a necessary architecture decision for neural networks, lest the network be a deep linear classifier.

Deep nonlinear classifiers can fit their data so well that network designers are often faced with the choice of including a stochastic regularizer, like adding noise to hidden layers or applying dropout (Srivastava et al. 2014), and this choice remains separate from the activation function. Some stochastic regularizers can make a network behave like an ensemble of networks, a pseudoensemble (Bachman et al. 2014), and can lead to marked accuracy increases. For example, the stochastic regularizer dropout creates a pseudoensemble by randomly altering some activation decisions through zero multiplication. Nonlinearities and dropout thus determine a neuron's output together, yet the two innovations have remained distinct. Moreover, neither subsumed the other, because popular stochastic regularizers act irrespectively of the input, and nonlinearities are aided by such regularizers.

In this work, we bridge the gap between stochastic regularizers and nonlinearities. To do this, we consider an adaptive stochastic regularizer that allows for a more probabilistic view of a neuron's output. With this stochastic regularizer we can train networks without any nonlinearity while matching the performance of activations combined with dropout. This is unlike other stochastic regularizers without any nonlinearity, as they merely yield a regularized linear classifier. We also take the expected transformation of this stochastic regularizer to obtain a novel nonlinearity which matches
or exceeds models with ReLUs or ELUs across tasks from computer vision, natural language processing, and automatic speech recognition.

*Work done while the author was at TTIC. Code available at github.com/hendrycks/GELUs

Toyota Technological Institute at Chicago. kgimpel@ttic.edu

"}, {"section_index": "2", "section_name": "2 GELU FORMULATION", "section_text": "We create our stochastic regularizer and nonlinearity by combining intuitions from dropout, zoneout, and ReLUs. First note that a ReLU and dropout both yield a neuron's output, with the ReLU deterministically multiplying the input by zero or one and dropout stochastically multiplying by zero. Also, a new RNN regularizer called zoneout stochastically multiplies inputs by one (Krueger et al. 2016). We merge this functionality by multiplying the input by zero or one, but the values of this zero-one mask are stochastically determined while also being dependent upon the input. Specifically, we multiply the neuron input x by m ~ Bernoulli(Φ(x)), where Φ(x) = P(X ≤ x), X ~ N(0, 1) is the cumulative distribution function of the standard normal distribution. The distribution Bernoulli(Φ(x)) appears in Gaussian Processes for classification (Houlsby et al. 2011), and the neuron's output is xm, giving x or 0. Thus inputs have a higher probability of being \"dropped\" as x decreases, so the transformation applied to x is stochastic yet depends upon the input. Masking inputs in this fashion retains nondeterminism but maintains dependency upon the input value. A stochastically chosen mask amounts to a stochastic zero or identity transformation of the input, leading us to call the regularizer the SOI map.
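A minimal sketch of the SOI map as just defined (ours, not the authors' code): each input is kept or zeroed by a Bernoulli draw whose keep-probability is Φ(x):

```python
import numpy as np
from scipy.stats import norm

def soi_map(x, rng=None):
    """Zero-Or-Identity map: multiply x elementwise by
    m ~ Bernoulli(Phi(x)), where Phi is the standard normal CDF."""
    rng = rng if rng is not None else np.random.default_rng(0)
    keep_prob = norm.cdf(x)
    mask = rng.random(np.shape(x)) < keep_prob
    return x * mask

x = np.linspace(-3, 3, 7)
print(soi_map(x))   # large positive inputs are almost always kept
```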
The SOI map is much like Adaptive Dropout (Ba & Frey 2013), but we refer to the regularizer as the SOI map because adaptive dropout is used in tandem with nonlinearities. In Section 4, we show that simply masking linear transformations with the SOI map exceeds the power of linear classifiers and competes with nonlinearities aided by dropout, showing that nonlinearities can be replaced with stochastic regularizers.

The SOI map can be made deterministic should we desire a deterministic decision from a neural network, and this gives rise to our new nonlinearity. The nonlinearity is the expected transformation of the SOI map on an input x, which is Φ(x) × Ix + (1 − Φ(x)) × 0x = xΦ(x). Loosely, this expression states that we scale x by how much greater it is than other inputs. We now make an obvious extension. Since the cumulative distribution function of a Gaussian is computed with the error function, we define the Gaussian Error Linear Unit (GELU) as

GELU(x) = x P(X ≤ x) = x Φ(x),

where X ~ N(μ, σ²). Both μ and σ are possibly parameters to optimize, but throughout this work we simply let μ = 0 and σ = 1. Consequently, we do not introduce any new hyperparameters in the following experiments. In the next section, we show that the GELU exceeds the performance of ReLUs and ELUs across numerous tasks.

[Figure 1: plot of the GELU, ReLU, and ELU over x ∈ [−4, 3].]

Figure 1: The GELU (μ = 0, σ = 1), ReLU, and ELU (α = 1).

We evaluate the GELU, ELU, and ReLU on MNIST classification (grayscale images with 10 classes, 60k training examples and 10k test examples), MNIST autoencoding, Tweet part-of-speech tagging (1000 training, 327 validation, and 500 testing tweets), TIMIT frame recognition (3696 training, 1152 validation, and 192 test audio sentences), and CIFAR-10/100 classification (color images with 10/100 classes, 50k training and 10k test examples). We do not evaluate nonlinearities like the LReLU because of its similarity to ReLUs (see Maas et al. (2013) for a description of LReLUs).

"}, {"section_index": "3", "section_name": "3.1 MNIST CLASSIFICATION", "section_text": "Let us verify that this nonlinearity competes with previous activation functions by replicating an experiment from Clevert et al. (2016). To this end, we train a fully connected neural network with GELUs (μ = 0, σ = 1), ReLUs, and ELUs (α = 1). Each 8-layer, 128-neuron-wide neural network is trained for 50 epochs with a batch size of 128. This experiment differs from those of Clevert et al. in that we use the Adam optimizer (Kingma & Ba 2015) rather than stochastic gradient descent without momentum, and we also show how well nonlinearities cope with dropout. Weights are initialized with unit-norm rows, as this has a positive impact on each nonlinearity's performance (Hendrycks & Gimpel 2016; Mishkin & Matas 2016; Saxe et al. 2014). Note that we tune over the learning rates {10^-3, 10^-4, 10^-5} with 5k validation examples from the training set and take the median results for five runs. Using these classifiers, we demonstrate in Figure 3 that classifiers using a GELU can be more robust to noised inputs. Figure 2 shows that the GELU tends to have the lowest median training log loss with and without dropout. Consequently, although the GELU is inspired by a different stochastic process, it comports well with dropout.

[Figure 2: MNIST log-loss curves for the GELU, ELU, and ReLU, without dropout (left) and with a dropout rate of 0.5 (right).]

Figure 2: MNIST Classification Results. Left are the loss curves without dropout, and right are curves with a dropout rate of 0.5. Each curve is the median of five runs. Training set log losses are the darker, lower curves, and the fainter, upper curves are the validation set log loss curves.

[Figure 3: test-set accuracy and log loss versus noise strength for the GELU, ELU, and ReLU.]

Figure 3: MNIST Robustness Results. Using different nonlinearities, we record the test set accuracy decline and log loss increase as inputs are noised. The MNIST classifier trained without dropout received inputs with uniform noise Unif[−a, a] added to each example at different levels a, where a = 3 is the greatest noise strength. Here GELUs display robustness matching or exceeding ELUs and ReLUs.

"}, {"section_index": "4", "section_name": "3.2 MNIST AUTOENCODER", "section_text": "We now consider a self-supervised setting and train a deep autoencoder on MNIST (Desjardins et al. 2015). To accomplish this, we use a network with layers of width 1000, 500, 250, 30, 250, 500, 1000, in that order. We again use the Adam optimizer and a batch size of 64. Our loss is the mean squared loss. We vary the learning rate from 10^-3 to 10^-5. We also tried a learning rate of 0.01, but ELUs diverged, and GELUs and ReLUs converged poorly.
The results in Figure 4 indicate that the GELU accommodates different learning rates and that the GELU either ties or significantly outperforms the other nonlinearities. To save space, we show the learning curve for the 10^-5 learning rate in appendix A.

[Figure 4: MNIST autoencoder reconstruction-loss curves for the GELU, ELU, and ReLU at two learning rates.]

Figure 4: MNIST Autoencoding Results. Each curve is the median of three runs. Left are loss curves for a learning rate of 10^-3, and the right figure is for a 10^-4 learning rate. Light, thin curves correspond to test set log losses.

Many datasets in natural language processing are relatively small, so it is important that an activation generalize well from few examples. To meet this challenge, we compare the nonlinearities on POS-annotated tweets (Gimpel et al. 2011; Owoputi et al. 2013) which contain 25 tags. The tweet tagger is simply a two-layer network with pretrained word vectors trained on a corpus of 56 million tweets (Owoputi et al. 2013). The input is the concatenation of the vector of the word to be tagged and those of its left and right neighboring words. Each layer has 256 neurons, a dropout keep probability of 0.8, and the network is optimized with Adam while tuning over the learning rates {10^-3, 10^-4, 10^-5}. We train each network five times per learning rate, and the median test set error is 12.57% for the GELU, 12.67% for the ReLU, and 12.91% for the ELU.

"}, {"section_index": "5", "section_name": "3.4 TIMIT FRAME CLASSIFICATION", "section_text": "Our next challenge is phone recognition with the TIMIT dataset, which has recordings of 680 speakers in a noiseless environment. The system is a five-layer, 2048-neuron-wide classifier as in (Mohamed et al. 2012) with 39 output phone labels and a dropout rate of 0.5 as in (Srivastava 2013). This network takes as input 11 frames and must predict the phone of the center frame using 26 MFCC, energy, and derivative features per frame. We tune over the learning rates {10^-3, 10^-4, 10^-5} and optimize with Adam. After five runs per setting, we obtain the median curves in Figure 5, and the median test error chosen at the lowest validation error is 29.3% for the GELU, 29.5% for the ReLU, and 29.6% for the ELU.

[Figure 5: TIMIT log-loss curves for the GELU, ELU, and ReLU.]

Figure 5: TIMIT Frame Classification. Learning curves show training set convergence, and the lighter curves show the validation set convergence.

"}, {"section_index": "6", "section_name": "3.5 CIFAR-10/100 CLASSIFICATION", "section_text": "Next, we demonstrate that for more intricate architectures the GELU nonlinearity again outperforms other nonlinearities.
We evaluate this activation function on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky 2009) with shallow and deep convolutional neural networks, respectively.

Our shallower convolutional neural network is a 9-layer network with the architecture and training procedure from Salimans & Kingma (2016), while using batch normalization to speed up training. The architecture is described in appendix B and recently obtained state of the art on CIFAR-10 without data augmentation. No data augmentation was used to train this network. We tune over the initial learning rates {10^-3, 10^-4, 10^-5} with 5k validation examples, then train on the whole training set again based upon the learning rate from cross-validation. The network is optimized with Adam for 200 epochs, and at the 100th epoch the learning rate linearly decays to zero. Results are shown in Figure 6, and each curve is a median of three runs. Ultimately, the GELU obtains a median error rate of 7.89%, the ReLU obtains 8.16%, and the ELU obtains 8.41%.

[Figure 6: CIFAR-10 error-rate curves for the GELU, ELU, and ReLU.]

Figure 6: CIFAR-10 Results. Each curve is the median of three runs. Learning curves show training set error rates, and the lighter curves show the test set error rates.

Next we consider a wide residual network on CIFAR-100 with 40 layers and a widening factor of 4 (Zagoruyko & Komodakis 2016). We train for 50 epochs with the learning rate schedule described in (Loshchilov & Hutter 2016) (T_0 = 50, η = 0.1) with Nesterov momentum, and with a dropout keep probability of 0.7. Some have noted that ELUs have an exploding gradient with residual networks (Shah et al. 2016), and this is alleviated with batch normalization at the end of a residual block. Consequently, we use a Conv-Activation-Conv-Activation-BatchNorm block architecture to be charitable to ELUs. Over three runs we obtain the median convergence curves in Figure 7. Meanwhile, the GELU achieves a median error of 20.74%, the ReLU obtains 21.77% (without our changes described above, the original 40-4 WideResNet with a ReLU obtains 22.89% (Zagoruyko & Komodakis 2016)), and the ELU obtains 22.98%.

[Figure 7: CIFAR-100 wide residual network loss curves for the GELU, ELU, and ReLU.]

Figure 7: CIFAR-100 Wide Residual Network Results. Learning curves show training set convergence with dropout on, and the lighter curves show the test set convergence with dropout off.

"}, {"section_index": "7", "section_name": "4 SOI MAP EXPERIMENTS", "section_text": "Now let us consider how well the SOI map performs rather than the GELU, its expectation. We consider evaluating the SOI map, or an Adaptive Dropout variant without any nonlinearity, to show that neural networks do not require traditional nonlinearities. We can expect the SOI map to perform differently from a nonlinearity plus dropout. For one, stochastic regularizers applied to composed linear maps without a deterministic nonlinearity tend to yield a regularized deep linear transformation. In the case of a single linear transformation, dropout and the SOI map behave differently. To see this, recall that Wang & Manning (2013) showed that for least squares regression, if a prediction is Ŷ = Σ_i w_i x_i m_i, where x is a zero-centered input, w is a zero-centered learned weight, and m is a dropout mask of zeros and ones, we have that Var(Ŷ) = Σ_i w_i² x_i² p(1 − p) when using dropout. Meanwhile, the SOI map has the prediction variance Σ_i w_i² x_i² Φ(x_i)(1 − Φ(x_i)). Thus as x_i increases, the variance of the prediction increases for dropout, but for the SOI map x_i's increase is dampened by the Φ(x_i)(1 − Φ(x_i)) term. Then as the inputs and score get larger, a prediction with the SOI map can have less volatility rather than more.
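The two variance formulas above are easy to check numerically. A quick Monte Carlo sketch (our own toy weights and inputs, not from the paper):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 3.0])
p, n = 0.5, 200_000

# dropout: masks are input-independent Bernoulli(p)
y_drop = ((rng.random((n, 3)) < p) * w * x).sum(axis=1)
print(y_drop.var(), (w**2 * x**2 * p * (1 - p)).sum())

# SOI map: the keep-probability Phi(x_i) depends on the input itself,
# so large inputs contribute vanishing variance via Phi(1 - Phi)
phi = norm.cdf(x)
y_soi = ((rng.random((n, 3)) < phi) * w * x).sum(axis=1)
print(y_soi.var(), (w**2 * x**2 * phi * (1 - phi)).sum())
```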
In the experiments that follow, we confirm that the SOI map and dropout differ, because the SOI map yields accuracies comparable to nonlinearities plus dropout, despite the absence of any traditional nonlinearity.

We begin our experimentation by reconsidering the 8-layer MNIST classifier. We have the same training procedure except that we tune the dropout keep probability over {1, 0.75, 0.5} when using a nonlinearity. There is no dropout while using the SOI map. Meanwhile, for the SOI map we tune no additional hyperparameter. When the SOI map trains, we simply mask the neurons, but during testing we use the expected transformation of the SOI map (the GELU) to make the prediction deterministic, mirroring how dropout is turned off during testing. A ReLU with dropout obtains 2.10% error, and the SOI map achieves 2.00% error.

Next, we reconsider the Twitter POS tagger. We again perform the same experimentation but also tune over the dropout keep probabilities {1, 0.75, 0.5} when using a nonlinearity. In this experiment, the ReLU with dropout obtains 11.9% error, and the SOI map obtains 12.5% error. It is worth mentioning that the best dropout setting for the ReLU was when the dropout keep probability was 1, i.e., when dropout was off, so the regularization provided by the SOI map was superfluous.

Finally, we turn to the earlier TIMIT experiment. Like the previous two experiments, we also tune over the dropout keep probabilities {1, 0.75, 0.5} when using a nonlinearity. Under this setup, the ReLU ties with the SOI map as both obtain 29.46% error, though the SOI map obtained its best validation loss in the 7th epoch while the ReLU with dropout did in the 27th epoch.

[Figure 8: left, the Gaussian CDF versus the logistic sigmoid; right, the GELU, SiLU, and ReLU.]

Figure 8: Although a logistic sigmoid function approximates a Gaussian CDF, the difference is still conspicuous and it is not a suitable approximation.

In summary, the SOI map can be comparable to a nonlinearity with dropout and does not simply yield a regularized linear transformation. This is surprising because the SOI map is not like a traditional nonlinearity while it has a nonlinearity's power. The upshot may be that traditional, deterministic, differentiable functions applied to a neuron's input are less essential to the success of neural networks, since a stochastic regularizer can achieve comparable performance.

"}, {"section_index": "8", "section_name": "DISCUSSION", "section_text": "Across several experiments, the GELU outperformed previous nonlinearities, but it bears semblance to the ReLU and ELU in other respects. For example, as σ → 0 and if μ = 0, the GELU becomes a ReLU. Moreover, the ReLU and GELU are equal asymptotically. In fact, the GELU can be viewed as a natural way to smooth a ReLU. To see this, recall that ReLU(x) = max(x, 0) = x1(x > 0) (where 1 is the indicator function), while the GELU is xΦ(x) if μ = 0, σ = 1. Then the CDF is a smooth approximation to the binary function the ReLU uses, like how the sigmoid smoothed binary threshold activations. Unlike the ReLU, the GELU and ELU can be both negative and positive. In fact, if we used the cumulative distribution function of the standard Cauchy distribution, then the ELU (when α = 1/π) is asymptotically equal to xP(C ≤ x), C ~ Cauchy(0, 1) for negative values, and for positive values is xP(C ≤ x) if we shift the line down by 1/π. These are some fundamental relations to previous nonlinearities.

However, the GELU has several notable differences.
This non-convex, non-monotonic function is not linear in the positive domain and exhibits curvature at all points. Meanwhile ReLUs and ELUs, which are convex and monotonic activations, are linear in the positive domain and thereby can lack curvature. As such, increased curvature and non-monotonicity may allow GELUs to more easily approximate complicated functions than can ReLUs or ELUs. Also, since ReLU(x) = x1(x > 0) and GELU(x) = xΦ(x) if μ = 0, σ = 1, we can see that the ReLU gates the input depending upon its sign, while the GELU weights its input depending upon how much greater it is than other inputs. In addition, and significantly, the GELU has a probabilistic interpretation given that it is the expected SOI map, which combines ideas from dropout and zoneout.

The SOI map also relates to a previous stochastic regularizer called Adaptive Dropout (Ba & Frey 2013). The crucial difference between typical adaptive dropout and the SOI map is that adaptive dropout multiplies the nonlinearity's output by a mask, but the SOI map multiplies the neuron input by a mask. Consequently, the SOI map trains without any nonlinearity, while adaptive dropout modifies the output of a nonlinearity. In this way, standard implementations of adaptive dropout do not call into question the necessity of traditional nonlinearities, since they augment a nonlinearity's decision rather than eschew the nonlinearity entirely.

We also have two practical tips for using the GELU. First, we advise using an optimizer with momentum when training with a GELU, as is standard for deep neural networks. Second, using a close approximation to the cumulative distribution function of a Gaussian distribution is important. For example, a sigmoid function σ(x) = 1/(1 + e^{-x}) is an approximation of the cumulative distribution function of a normal distribution, but it is not a close enough approximation (Ba & Frey 2013). Indeed, we found that a Sigmoid Linear Unit (SiLU) xσ(x) performs worse than GELUs but usually better than ReLUs and ELUs. The maximum difference between σ(x) and Φ(x) is approximately 0.1, but the difference between the two is visible in Figure 8. Instead of using xσ(x) to approximate xΦ(x), we used 0.5x(1 + tanh[√(2/π)(x + 0.044715x³)]) (Choudhury 2014).¹ This is a sufficiently fast, easy-to-implement approximation which we used in every experiment in this paper.

¹Thank you to Dmytro Mishkin for bringing an approximation like this to our attention.
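Both the exact GELU and the tanh approximation quoted above are one-liners. A sketch (ours; the exact form computes Φ via the error function):

```python
import numpy as np
from scipy.special import erf

def gelu_exact(x):
    """x * Phi(x), with Phi the standard normal CDF."""
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

def gelu_tanh(x):
    """The fast tanh approximation used in the paper's experiments."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi)
                                    * (x + 0.044715 * x**3)))

x = np.linspace(-3, 3, 601)
print(np.max(np.abs(gelu_exact(x) - gelu_tanh(x))))  # agree to ~1e-3 or better
```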
"}, {"section_index": "9", "section_name": "6 CONCLUSION", "section_text": "We observed that the GELU outperforms previous nonlinearities across tasks from computer vision, natural language processing, and automatic speech recognition. Moreover, we showed that a stochastic regularizer can compete with a nonlinearity aided by dropout, indicating that traditional nonlinearities may not be crucial to neural network architectures. This stochastic regularizer makes probabilistic decisions, and the GELU is the expectation of the decision. We therefore probabilistically related the GELU to the SOI map, thereby bridging a nonlinearity to a stochastic regularizer. Now, having seen that a stochastic regularizer can replace a traditional nonlinearity, we hope that future work explores the design space of other stochastic regularizers as powerful as a traditional activation aided by dropout. Furthermore, there may be fruitful modifications to the GELU in different contexts. For example, for sparser inputs, a nonlinearity of the form xP(L ≤ x), L ~ Laplace(0, 1) may be a more effective activation. For the numerous datasets evaluated in this paper, the GELU exceeded the accuracy of the ELU and ReLU consistently, making it a viable alternative to previous nonlinearities.

"}, {"section_index": "10", "section_name": "ACKNOWLEDGMENT", "section_text": "We would like to thank NVIDIA Corporation for donating several TITAN X GPUs used in this research.

"}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Neural Information Processing Systems, 2013.

Amit Choudhury. A simple approximation to the area under standard normal curve. In Mathematics and Statistics, 2014.

Dan Hendrycks and Kevin Gimpel. Adjusting for dropout variance in batch normalization and weight initialization. In arXiv, 2016.

John Hopfield. Neural networks and physical systems with emergent collective computational abilities. In Proceedings of the National Academy of Sciences of the USA, 1982.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations, 2015.

Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with restarts. arXiv, 2016.

Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In International Conference on Machine Learning, 2013.

Abdelrahman Mohamed, George E. Dahl, and Geoffrey E. Hinton. Acoustic modeling using deep belief networks. In IEEE Transactions on Audio, Speech, and Language Processing, 2012.

Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Neural Information Processing Systems, 2016.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations, 2014.

Anish Shah, Sameer Shinde, Eashan Kadam, Hena Shah, and Sandip Shingade. Deep residual networks with exponential linear unit. In Vision Net, 2016.

Nitish Srivastava. Improving neural networks with dropout. In University of Toronto, 2013.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. British Machine Vision Conference, 2016.

[Figure 9: MNIST autoencoder loss curves at learning rate 10^-5 for the GELU, ELU, and ReLU.]

Figure 9: MNIST Autoencoding Results for a learning rate of 10^-5. Each curve is a median of three runs. Light, thin curves correspond to test set log losses. Note that reconstruction errors are higher than for models trained with 10^-3 or 10^-4 learning rates.

Layer Type                      # channels   x, y dimension
raw RGB input                   3            32
ZCA whitening                   3            32
Gaussian noise σ = 0.15         3            32
3 × 3 conv with activation      96           32
3 × 3 conv with activation      96           32
3 × 3 conv with activation      96           32
2 × 2 max pool, stride 2        96           16
dropout with p = 0.5            96           16
3 × 3 conv with activation      192          16
3 × 3 conv with activation      192          16
3 × 3 conv with activation      192          16
2 × 2 max pool, stride 2        192          8
dropout with p = 0.5            192          8
3 × 3 conv with activation      192          6
1 × 1 conv with activation      192          6
1 × 1 conv with activation      192          6
global average pool             192          1
softmax output                  10           1

Table 1: Neural network architecture for CIFAR-10."}]
rksfwnFxl | [{"section_index": "0", "section_name": "LSTM-BASED SYSTEM-CALL LANGUAGE MODELING AND ROBUST ENSEMBLE METHOD FOR DESIGNING HOST-BASED INTRUSION DETECTION SYSTEMS", "section_text": "Gyuwan Kim, Hayoon Yi, Jangho Lee, Yunheung Paek, Sungroh Yoon

{kgwmath, hyyi, ubuntu, ypaek, sryoon}@snu.ac.kr

In computer security, designing a robust intrusion detection system is one of the most fundamental and important problems. In this paper, we propose a system-call language-modeling approach for designing anomaly-based host intrusion detection systems. To remedy the issue of high false-alarm rates commonly arising in conventional methods, we employ a novel ensemble method that blends multiple thresholding classifiers into a single one, making it possible to accumulate 'highly normal' sequences. The proposed system-call language model has various advantages leveraged by the fact that it can learn the semantic meaning and interactions of each system call, which existing methods cannot effectively consider. Through diverse experiments on public benchmark datasets, we demonstrate the validity and effectiveness of the proposed method. Moreover, we show that our model possesses high portability, which is one of the key aspects of realizing successful intrusion detection systems."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "An intrusion detection system (IDS) refers to a hardware/software platform for monitoring network or system activities to detect malicious signs therefrom. Nowadays, practically all existing computer systems operate in a networked environment, which continuously makes them vulnerable to a variety of malicious activities. Over the years, the number of intrusion events has been significantly increasing across the world, and intrusion detection systems have already become one of the most critical components in computer security. With the explosive growth of logging data, the role of machine learning in effective discrimination between malicious and benign system activities has never been more important."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "A survey of existing IDS approaches needs a multidimensional consideration. Depending on the scope of intrusion monitoring, there exist two main types of intrusion detection systems: network-based (NIDS) and host-based (HIDS). The network-based intrusion detection systems monitor communications between hosts, while the host-based intrusion detection systems monitor the activity on a single system. From a methodological point of view, intrusion detection systems can also be classified into two classes (Jyothsna et al. 2011): signature-based and anomaly-based. The signature-based approaches match the observed behaviors against templates of known attack patterns, while the anomaly-based techniques compare the observed behaviors against an extensive baseline of normal behaviors constructed from prior knowledge, declaring each anomalous activity to be an attack. The signature-based methods detect already known and learned attack patterns well but have an innate difficulty in detecting unfamiliar attack patterns. On the other hand, the anomaly-based methods can potentially detect previously unseen attacks but may suffer from making a robust baseline of normal behavior, often yielding high false alarm rates.
The ability to detect a 'zero-day' attack (i.e., a vulnerability unknown to system developers) in a robust manner is becoming an important requirement of an anomaly-based approach. In terms of this two-dimensional taxonomy, we can classify our proposed method as an anomaly-based host intrusion detection system.

It was Forrest et al. (1996) who first started to use system-call traces as the raw data for host-based anomaly intrusion detection systems, and system-call traces have been widely used for IDS research and development since their seminal work (Forrest et al. 2008). Recently, Creech & Hu (2014) proposed to use neural networks on top of a sequence of system calls in the context of HIDS. System calls represent low-level interactions between programs and the kernel in the system, and many researchers consider system-call traces the most accurate source useful for detecting intrusion in an anomaly-based HIDS. From a data acquisition point of view, system-call traces are easy to collect in a large quantity in real time. Our approach described in this paper also utilizes system-call traces as input data.

For nearly two decades, various research has been conducted based on analyzing system-call traces. Most of the existing anomaly-based host intrusion detection methods typically aim to identify meaningful features using the frequency of individual calls and/or windowed patterns of calls from sequences of system calls. However, such methods have limited ability to capture call-level features and phrase-level features simultaneously. As will be detailed shortly, our approach tries to address this limitation by generating a language model of system calls that can jointly learn the semantics of individual system calls and their interactions that can collectively represent a new meaning appearing in call sequences.

In natural language processing (NLP), a language model represents a probability distribution over sequences of words, and language modeling has been a very important component of many NLP applications, including machine translation (Cho et al. 2014; Bahdanau et al. 2014), speech recognition (Graves et al. 2013), question answering (Hermann et al. 2015), and summarization (Rush et al. 2015). Recently, deep recurrent neural network (RNN)-based language models are showing remarkable performance in various tasks (Zaremba et al. 2014; Jozefowicz et al. 2016). It is expected that such neural language models will be applicable to not only NLP applications but also signal processing, bioinformatics, economic forecasting, and other tasks that require effective temporal modeling.

Motivated by this performance advantage and versatility of deep RNN-based language modeling, we propose an application of neural language modeling to host-based intrusion detection. We consider system-call sequences as a language used for communication between users (or programs) and the system. In this view, system calls and system-call sequences correspond to words and sentences in natural languages, respectively. Based on this system-call language model, we can perform various tasks that comprise our algorithm to detect anomalous system-call sequences: e.g., estimation of the relative likelihood of different words (i.e., system calls) and phrases (i.e., a window of system calls) in different contexts.
2002 Wang et al.]2010f Creech & Hu2014).For more recen1 deep learning-based techniques, there exists an example that utilized LSTM for improving intrusior. detection performance (Staudemeyer & Omlin2013] Staudemeyer2015). However, the work by Staudemeyer & Omlin (2013); Staudemeyer(2015) was in essence a feature-based supervised clas sifier (rather than an anomaly detector) requiring heavy annotation efforts to create labels. As such. their work required explicitly labeled attack data and possessed an inherent limitation that it coulc. not detect new types of attacks. In addition, their approach was not an end-to-end framework anc. needed careful feature engineering to extract salient features for the classification task. Only one. binary label was given per sequence to train their model, unlike our proposed method that is trainec. to predict the next call, effectively capturing contextual information needed for classification.\nOur specific contributions can be summarized as follows: First, to model sequences of system calls. we propose a neural language modeling technique that utilizes long short-term memory (LSTM) (Hochreiter & Schmidhuber1997) units for enhanced long-range dependence learning. The present work is one of the first end-to-end frameworks to model system-call sequences as a natural lan- guage for effectively detecting anomalous patterns therefrom. Second, to reduce false-alarm rates. of anomaly-based intrusion detection, we propose a leaky rectified linear units (ReLU) (Maas et al.. 2013) based ensemble method that constructs an integrative classifier using multiple (relatively weak) thresholding classifiers. Each of the component classifiers is trained to detect different types of 'highly normal' sequences (i.e., system call sequences with very high probability of being nor-. mal), and our ensemble method blends them to produce a robust classifier that delivers significantly lower false-alarm rates than other commonly used ensemble methods. As shown in Figure[1] these\ntwo aspects of our contributions can seamlessly be combined into a single framework. Note that the ensemble method we propose is not limited to our language-model based front-end but also applicable to other types of front-ends.\nIn the rest of this paper, we will explain more details of our approach and then present our experi mental results that demonstrate the effectiveness of our proposed method.\nFigure[1|shows the overview of our proposed approach to designing an intrusion detection system Our method consists of two parts: the front-end is for language modeling of system calls in various settings, and the back-end is for anomaly prediction based on an ensemble of thresholding classifiers derived from the front-end. In this section, we describe details of each component in our pipeline."}, {"section_index": "3", "section_name": "2.1 LANGUAGE MODELING OF SYSTEM CALLS", "section_text": "Figure 2|illustrates the architecture of our system-call language model. The system call language model estimates the probability distribution of the next call in a sequence given the sequence of previous calls. We assume that the host system generates a finite number of system calls. We index each system call by using an integer starting from 1 and denote the fixed set of all possible system calls in the system as S = {1, .:. , K}. Let x = x1x2 ..: xt(x; E S) denote a sequence of l system calls.\nAt the input layer, the call at each time step x, is fed into the model in the form of one-hot encoding. 
in other words, a K-dimensional vector with all elements zero except position x_i. At the embedding layer, incoming calls are embedded into continuous space by multiplying by an embedding matrix W, which has to be learned. At the hidden layer, the LSTM unit has an internal state, and this state is updated recurrently at each time step. At the output layer, a softmax activation function is used to produce the estimate of the normalized probability values of the possible calls coming next in the sequence, P(x_i | x_{1:i-1}). According to the chain rule, we can estimate the sequence probability by the following formula:

P(x) = \prod_{i=1}^{l} P(x_i \mid x_{1:i-1}).   (1)

Given normal training system-call sequence data, we can train this LSTM-based system-call language model using the back-propagation through time (BPTT) algorithm. The training criterion minimizes the cross-entropy loss, which is equivalent to maximizing the likelihood of the system-call sequence. A standard RNN often suffers from the vanishing/exploding gradient problem, and when training with BPTT, gradient values tend to blow up or vanish exponentially. This makes it difficult to learn long-term dependency in RNNs (Bengio et al., 1994). LSTM, a well-designed RNN architecture component, is equipped with an explicit memory cell and tends to be more effective in coping with this problem, resulting in numerous successes in recent RNN applications.

Figure 2: System-call language model. (a) Language model architecture; (b) estimation of sequence probability.

Markov chains and hidden Markov models are widely used probabilistic models that can estimate the probability of the next call given a sequence of previous calls. There has been previous work on using Markov models in intrusion detection systems (Hofmeyr et al., 1998; Hoang et al., 2003; Hu et al., 2009; Yolacan et al., 2014). However, these methods have an inherent limitation in that the probability of the next call is decided by only a finite number of previous calls. Moreover, LSTM can model exponentially more complex functions than Markov models by using continuous-space representations. This property alleviates the data sparsity issue that occurs when a large number of previous states are used in Markov models. In short, the advantages of LSTM models compared to Markov models are twofold: the ability to capture long-term dependency and enhanced expressive power.

Because typical processes in the system execute a long chain of system calls, the number of system calls required to fully understand the meaning of a system-call sequence is quite large. In addition, the system calls comprising a process are intertwined with each other in a complicated way. The boundaries between system-call sequences are also vague. In this regard, learning long-term dependence is crucial for devising effective intrusion detection systems.

Given a new query system-call sequence, on the assumption that abnormal call patterns deviate from learned normal patterns, yielding significantly lower probabilities than those of normal call patterns, a sequence with an average negative log-likelihood above a threshold is classified as abnormal, while a sequence with an average negative log-likelihood below the threshold is classified as normal. By changing the threshold value, we can draw a receiver operating characteristic (ROC) curve, which is the most widely used measure to evaluate intrusion detection systems.

Commonly, an IDS is evaluated by the ROC curve rather than a single point corresponding to a specific threshold on the curve. Sensitivity to the threshold is shown on the curve. The x-axis of the curve represents false alarm rates, and the y-axis represents detection rates.¹ If the threshold is too low, the IDS is able to detect attacks well, but users would be annoyed by false alarms. Conversely, if the threshold is too high, false alarm rates become lower, but it is easy for the IDS to miss attacks. ROC curves closer to (0, 1) mean a better classifier (i.e., a better intrusion detection system). The area under the curve (AUC) summarizes the ROC curve into a single value in the range [0, 1] (Bradley, 1997).

¹ A false alarm rate is the ratio of validation normal data classified as abnormal. A detection rate is the ratio of detected attacks in the real attack data.
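To make the preceding description concrete, the sketch below shows how such a language model and the per-sequence score used later in this section could be written in PyTorch. This is our illustration rather than the authors' implementation; the class name, layer sizes, and the convention that index 0 encodes the [GO] symbol are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SyscallLM(nn.Module):
    """LSTM language model over integer-coded system calls 1..K (0 = [GO])."""
    def __init__(self, K, embed_dim=200, hidden_dim=200, layers=1):
        super().__init__()
        self.embed = nn.Embedding(K + 1, embed_dim)            # +1 for [GO]
        self.lstm = nn.LSTM(embed_dim, hidden_dim, layers, batch_first=True)
        self.out = nn.Linear(hidden_dim, K + 1)

    def forward(self, x):                                      # x: (batch, length)
        h, _ = self.lstm(self.embed(x))
        return self.out(h)                                     # next-call logits

def avg_neg_log_likelihood(model, seq):
    """f(x): average negative log-likelihood of one sequence (1-D LongTensor)."""
    go = torch.zeros(1, dtype=torch.long)
    inputs = torch.cat([go, seq])[None, :-1]                   # [GO], x_1, ..., x_{l-1}
    logits = model(inputs)[0]                                  # (l, K+1)
    return F.cross_entropy(logits, seq).item()                 # -(1/l) sum_i log P(x_i | x_{1:i-1})

Training would minimize the same cross-entropy over mini-batches of normal traces; BPTT is what loss.backward() performs through the unrolled LSTM.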
Building a 'strong normal' model (a model representing system-call sequences with high probabilities of being normal) is challenging because of over-fitting issues. In other words, a lower training loss does not necessarily imply better generalization performance. We can consider two reasons for encountering this issue.

First, it is possible that only normal data were used for training the IDS, without any attack data. Learning discriminative features that can separate normal call sequences from abnormal sequences is thus hard without seeing any abnormal sequences beforehand. This is a common obstacle for almost every anomaly detection problem. In particular, malicious behaviors are frequently hidden and account for only a small part of all the system-call sequences.

Second, in theory, we need a huge amount of data to cover all possible normal patterns to train the model satisfactorily. However, doing so is often impossible in a realistic situation because of the diverse and dynamic nature of system-call patterns. Gathering live system-call data is harder than generating synthetic system-call data. The generation of normal training data in an off-line setting can create artifacts, because these data are made under fixed conditions for the sake of convenience in data generation. This setting may cause the normal patterns to have some bias.

All these situations make it more difficult to choose a good set of hyper-parameters for the LSTM architecture. To cope with this challenge, we propose a new ensemble method. Due to the lack of data, different models with different parameters capture slightly different normal patterns. If a function f: S* -> R, which maps a system-call sequence to a real value, is given, we can define a thresholding classifier as follows:

C_f(x; \theta) = normal, for f(x) <= \theta; abnormal, otherwise.   (2)

Most of the intrusion detection algorithms, including our proposed method, employ a thresholding classifier. For the sake of explanation, we define the term 'highly normal' sequence for the classifier C_f as a system-call sequence having an extremely low f value, so that it will be classified as normal even when the threshold \theta is sufficiently low to discriminate true abnormals. Highly normal sequences are represented as a flat horizontal line near (1, 1) in the ROC curve. The more the classifier finds highly normal sequences, the longer this line is. Note that a highly normal sequence is closely related to the false alarm rate.
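The thresholding classifier of equation (2) and the ROC evaluation described in Section 2.1 translate directly into code. A minimal sketch (the names are ours; scores would come from a scorer such as the average negative log-likelihood above, with label 1 marking attacks):

import numpy as np
from sklearn.metrics import roc_curve, auc

def C_f(score, theta):
    """Equation (2): classify as normal iff f(x) <= theta."""
    return "normal" if score <= theta else "abnormal"

def roc_summary(scores, labels):
    """labels: 1 = attack, 0 = normal; scores: f(x), higher = more anomalous.
    Sweeping theta yields the ROC curve; the FPR on normal data is the false alarm rate."""
    fpr, tpr, _ = roc_curve(labels, scores)
    return fpr, tpr, auc(fpr, tpr)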
Our goal is to minimize the false alarm rate through the composition of multiple classifiers C_f1, C_f2, ..., C_fm into a single classifier C_f, resulting in accumulated 'highly normal' data (here m is the number of classifiers used in the ensemble). This is due to the fact that a low false alarm rate is an important requisite in computer security, especially in intrusion detection systems. Our ensemble method can be represented by a simple formula:

f(x) = \sum_{i=1}^{m} w_i \, \sigma(f_i(x) - b_i).   (3)

As the activation function \sigma, we used a leaky ReLU function, namely \sigma(z) = max(z, 0.001z). Intuitively, the activation function forces potential 'highly normal' sequences having f_i values lower than b_i to carry their low values through to the final f value. If we used the regular ReLU function instead, the degree of 'highly normal' sequences could not be differentiated. We set the bias term b_i to the median of the f_i values of the normal training data. In (3), w_i indicates the importance of each classifier f_i. Because we do not know the performance of each classifier before evaluation, we set w_i to 1/m. Mathematically, this appears to be a degenerate version of a one-layer neural network. The basic philosophy of the ensemble method is that when the classification results from various classifiers are slightly different, we can make a better decision by composing them well. Still, including bad classifiers could degrade the overall performance. By choosing classifiers carefully, we can achieve satisfactory results in practice, as will be shown in Section 3.2.
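Equation (3) is simple enough to state directly in code. The following minimal sketch (function names are ours) uses w_i = 1/m and sets each b_i to the median score on the normal training data, as described above:

import numpy as np

def leaky_relu(z, alpha=0.001):
    return np.maximum(z, alpha * z)              # sigma(z) = max(z, 0.001z)

def build_ensemble(scorers, normal_train):
    """Blend m scorers f_i into a single score f, as in equation (3)."""
    m = len(scorers)
    b = [np.median([f_i(x) for x in normal_train]) for f_i in scorers]

    def f(x):
        return sum(leaky_relu(f_i(x) - b_i) for f_i, b_i in zip(scorers, b)) / m
    return f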
"}, {"section_index": "4", "section_name": "2.3 BASELINE CLASSIFIERS", "section_text": "Deep neural networks are an excellent representation learning method. We exploit the sequence representation learned from the final state vector of the LSTM layer after feeding in the whole sequence of calls. For comparison with our main classifier, we use two baseline classifiers that are commonly used for anomaly detection, exploiting the vectors corresponding to each sequence: k-nearest neighbor (kNN) and k-means clustering (kMC). Examples of previous work on mapping sequences into vectors of fixed-dimensional hand-crafted features include normalized frequency and tf-idf (Liao & Vemuri, 2002; Xie et al., 2014).

Let T be a normal training set, and let lstm(x) denote the learned representation of a call sequence x from the LSTM layer. kNN classifiers search for the k nearest neighbors in T of the query sequence x on the embedded space and measure the minimum radius needed to cover them all. The minimum radius g(x; k) is used to classify the query sequence x. Alternatively, we can count the number of vectors within a fixed radius, g(x; r). In this paper, we used the former. Because the computational cost of a kNN classifier is proportional to the size of T, using a kNN classifier would be intolerable when the normal training dataset becomes larger.

g(x; k) = \min r \ \text{s.t.}\ |\{y \in T : d(\mathrm{lstm}(x), \mathrm{lstm}(y)) \le r\}| \ge k,
g(x; r) = \frac{1}{|T|} \sum_{y \in T} 1[d(\mathrm{lstm}(x), \mathrm{lstm}(y)) \le r].

The kMC algorithm partitions T on the new vector space into k clusters G_1, G_2, ..., G_k, in which each vector belongs to the cluster with the nearest mean \mu_i, so as to minimize the within-cluster sum of squares. The clusters are computed by Lloyd's algorithm and converge quickly to a local optimum. The minimum distance from the cluster centers \mu_i,

h(x; k) = \min_{i=1,...,k} d(\mathrm{lstm}(x), \mu_i),

is used to classify the new query sequence.

The two classifiers C_g and C_h are closely related in that the kMC classifier is equivalent to the 1-nearest-neighbor classifier on the set of centers. In both cases of kNN and kMC, we need to choose the parameter k empirically, depending on the distribution of vectors. In addition, we need to choose a distance metric on the embedding space; we used the Euclidean distance measure in our experiments.
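Both baselines reduce to a few lines over the fixed-dimensional LSTM representations. A sketch under the assumption that train_vecs holds lstm(y) for every y in T:

import numpy as np
from sklearn.cluster import KMeans

def g(query_vec, train_vecs, k=11):
    """kNN score: radius of the smallest ball around lstm(x) containing k points of T."""
    d = np.linalg.norm(train_vecs - query_vec, axis=1)
    return np.sort(d)[k - 1]

def make_h(train_vecs, k=1):
    """kMC score: fit the k centres once, then measure distance to the nearest one."""
    centres = KMeans(n_clusters=k, n_init=10).fit(train_vecs).cluster_centers_
    return lambda query_vec: np.linalg.norm(centres - query_vec, axis=1).min()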
"}, {"section_index": "5", "section_name": "3.1 DATASETS", "section_text": "Though system-call traces themselves might be easy to acquire, collecting or generating a sufficient amount of meaningful traces for the evaluation of intrusion detection systems is a nontrivial task. In order to aid researchers in this regard, the following datasets were made publicly available by prior work: ADFA-LD (Creech & Hu, 2013), KDD98 (Lippmann et al., 2000) and UNM (University of New Mexico, 2012). The KDD98 and UNM datasets were released in 1998 and 2004, respectively. Although these two have received continued criticism about their applicability to modern systems (Brown et al., 2009; McHugh, 2000; Tan & Maxion, 2003), we include them, as the results show how our model fares against early works in the field, which were mostly evaluated on these datasets. As the ADFA-LD dataset was generated around 2012 to reflect contemporary systems and attacks, we have done our evaluation mainly on this dataset.

The ADFA-LD dataset was captured on an x86 machine running Ubuntu 11.04 and consists of three groups: normal training traces, normal validation traces, and attack traces. The KDD98 dataset was audited on a Solaris 2.5.1 server. We processed the audit data into system-call traces per session. Each session trace was marked as normal or attack depending on the information provided in the accompanying bsm.list file, which is available alongside the dataset. Among the UNM process set, we tested our model with lpr, which was collected from SunOS 4.1.4 machines. We merged the live lpr set and the synthetic lpr set. This combined dataset is further categorized into two groups: normal traces and attack traces. To maintain consistency with ADFA-LD, we divided the normal data of KDD98 and UNM into training and validation data in a ratio of 1:5, which is the ratio of the ADFA-LD dataset. The numbers of system-call sequences in each dataset we used are summarized in Table 1.

Table 1: Summary of datasets used for experiments.

Benchmark   Normal: # training   Normal: # validation   Attack: # types   Attack: # attacks
ADFA-LD     833                  4372                   6                 746
KDD98       1364                 5459                   10                41
UNM-lpr     627                  3136                   1                 2002

"}, {"section_index": "6", "section_name": "3.2 PERFORMANCE EVALUATION", "section_text": "We used ADFA-LD and built three independent system-call language models by changing the hyper-parameters of the LSTM layer: (1) one layer with 200 cells, (2) one layer with 400 cells, and (3) two layers with 400 cells. We matched the number of cells and the dimension of the embedding vector. Our parameters were uniformly initialized in [-0.1, 0.1]. For computational efficiency, we adjusted all system-call sequences in a mini-batch to be of similar lengths. We used the Adam optimizer (Kingma & Ba, 2014) for stochastic gradient descent with a learning rate of 0.0001. The normalized gradient was rescaled whenever its norm exceeded 5 (Pascanu et al., 2013), and we used dropout (Srivastava et al., 2014) with probability 0.5. We show the ROC curves obtained from the experiment in Figure 3.

Figure 3: ROC curves from ADFA-LD. Left: results from our three system-call language models with different parameters (LSTM-200, LSTM-400, LSTM-400*2) and the two baseline classifiers (kNN, kMC). Right: results from the different ensemble methods (averaging, voting, and the proposed method).

For the two baseline classifiers, we used the Euclidean distance measure. Changing the distance measure to another metric did not perform well on average. In the case of kNN, using k = 11 achieved the best performance empirically. For kMC, using k = 1 gave the best performance; increasing the value of k produced similar but poorer results. We speculate that the reason why a single cluster suffices is as follows: the learned representation vectors of the normal training sequences are symmetrically distributed. The kNN classifier C_g and the kMC classifier C_h achieved similar performance. Compared to Liao & Vemuri (2002); Xie et al. (2014), our baseline classifiers easily returned 'highly normal' sequences. This gain came from the better representation obtained from the proposed system-call language modeling.

As shown in the left plot of Figure 3, the three LSTM classifiers performed better than C_g and C_h. We assume that the three LSTM classifiers we trained are strong enough by themselves and that their classification results would be different from each other. By applying ensemble methods, we would expect to improve the performance. The first method was averaging, the second was voting, and lastly we used our ensemble method as explained in Section 2.2. The proposed ensemble method gave a better AUC value (0.928) by a large margin than the averaging ensemble method (0.890) and the voting ensemble method (0.859). Moreover, the curve obtained from the proposed ensemble method was placed above the individual single curves, while the other ensemble methods did not show this property.

In the setting of anomaly detection where attack data are unavailable, learning ensemble parameters is infeasible. If we exploit partial attack data, the assumption breaks down and the zero-day attack issue remains. Our ensemble method is appealing in that it performs remarkably well without learning.

To be clear, we applied ensemble methods to the three LSTM classifiers learned independently using different hyper-parameters, not to the baseline classifiers C_g or C_h. Applying ensemble methods to each type of baseline classifier gave unsatisfactory results, since changing parameters or initialization did not result in complementary and reasonable classifiers, which are essential for ensemble methods. Alternatively, we could ensemble our LSTM classifiers and baseline classifiers together. However, this would also be a bad idea because their f values differ in scale: the value of f in our LSTM classifier is an average negative log-likelihood, whereas g and h indicate distances in a continuous space.

According to Creech & Hu (2014), the extreme learning machine (ELM) model, sequence time-delay embedding (STIDE), and the hidden Markov model (HMM) (Forrest et al., 1996; Warrender et al., 1999) achieved about 13%, 23%, and 42% false alarm rates (FAR) at a 90% detection rate (DR), respectively. We achieved 16% FAR at 90% DR, which is comparable to the result of ELM and outperforms those of STIDE and HMM.
The ROC curves for ELM, HMM, and STIDE can be found, but we could not draw those curves on the same plot as ours because the authors provided no specific details of their results. Creech & Hu (2014) classified ELM as a semantic approach and the other two as syntactic approaches, which treat each call as a basic unit. To be fair, our proposed method should be compared with those approaches that use system calls only as a basic unit, in that we watch the sequence call by call. Furthermore, our method is end-to-end, while ELM relies on hand-crafted features.

In Creech & Hu (2014), the authors reported that there was significant overhead for training the models mentioned above, and the overhead would inevitably increase for handling larger data. Longer phrases tend to be more informative, but handling them typically requires larger dictionaries. For this reason, Creech & Hu (2014) had to put an empirical upper bound to limit the lengths of phrases, which might then lower the performance of the models in handling various attacks. By contrast, our approach can learn, in continuous space, semantically meaningful representations of calls, phrases, and sequences of arbitrary lengths. Moreover, our method can relieve the burden of preprocessing (potentially massive) logging data. We expect that incorporating prior knowledge into our model can further boost its performance.

"}, {"section_index": "7", "section_name": "3.3 PORTABILITY EVALUATION", "section_text": "We carried out experiments similar to those presented in Section 3.2 using the KDD98 dataset and the UNM dataset. First, we trained our system-call language model with an LSTM having one layer of 200 cells and built our classifier using the normal training traces of the KDD98 dataset. The same model was used to evaluate the UNM dataset, to examine the portability of LSTM models trained with data from a different but similar system. The results of our experiments are presented in Figure 4. For comparison, we display the ROC curve of the UNM dataset obtained by using the model trained on the normal traces therein. To examine portability, the system calls in the test datasets need to be included in or matched to those of the training datasets. UNM was generated using an earlier version of the OS than that of KDD98, but ADFA-LD was audited on a fairly different OS. This made our experiments with other combinations difficult.

Through a quantitative analysis, for the KDD98 dataset we obtained an almost perfect ROC curve with an AUC value of 0.994 and achieved 2.3% FAR at 100% DR. With the same model, we tested the UNM dataset and obtained an ROC curve with an AUC value of 0.969 and 5.5% FAR at 99.8% DR. This result is close to the result obtained by using the model trained on the normal training traces of the UNM dataset itself, as shown in the right plot of Figure 4.

This result is intriguing because it indicates that system-call language models have strong portability. In other words, after training one robust and extensive model, the model can then be deployed to other similar host systems. By doing so, we can mitigate the burden of training cost. This paradigm is closely related to the concept of transfer learning, or zero-shot learning.
It is well known that neural networks can learn abstract features and that these can be used successfully on unseen data.

Figure 4: ROC curves from the KDD98 dataset and the UNM dataset. Left: the evaluation on the KDD98 dataset. Right: the evaluation on the UNM dataset, using both the model trained with the UNM dataset and the model transferred from the KDD98 dataset.

As with word embeddings in natural language models (Mikolov et al., 2013), we expected to see a similar characteristic with the proposed system-call language model. The 2-D projection of the calls using the embedding matrix W learned from the system-call language model was done by t-SNE (Van der Maaten & Hinton, 2008) and is shown in Figure 5. Just as in a natural language model, we can expect calls having similar co-occurrence patterns to be positioned in similar locations in the embedded space after training the system-call language model. We can clearly see that calls having alike functionality are clustered with each other.

The first obvious cluster would be the read-write call pair and the open-close pair. The calls of each pair are located in close proximity in the space, meaning that our model learned to associate them together. At the same time, the difference between the calls of each pair appears to be almost the same in the space, which in turn would mean our model learned that the relationships of the two pairs somewhat resemble each other.

Another notable cluster would be the group of select, pselect6, ppoll, epoll_wait and nanosleep. The calls select, pselect6 and ppoll all have nearly identical functions in that they wait for some file descriptors to become ready for some class of I/O operation or for signals. The other two calls also have similar characteristics in that they wait for a certain event or signal as well. This could be interpreted as our model learning that these 'waiting' calls share similar characteristics.

Other interesting groups would be: readlink and lstat64, which are calls related to symbolic links; fstatat64 and fstat64, which are calls related to stat calls using file descriptors; and pipe and pipe2, which are nearly identical and appear almost as one point in the embedding layer. These cases show that our model is capable of learning similar characteristics among the great many system calls.

Similarly to the call representations, we expected that attack sequences of the same type would cluster with each other, and we tried to visualize them. However, for various reasons, including the lack of data, we were not able to observe this phenomenon. Taking into consideration the fact that detecting abnormal patterns against normal patterns well is already sufficiently hard, learning representations that separate different abnormal patterns using only seen normal patterns would be an extremely difficult task.
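For reference, a 2-D map like the one in Figure 5 can be produced from a learned embedding matrix in a few lines. In this sketch the random matrix merely stands in for the trained W, and the call names are placeholders:

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
W = rng.normal(size=(340, 200))                 # stand-in for the learned K x d embedding
call_names = [f"call_{i}" for i in range(len(W))]

coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(W)
for name, (cx, cy) in list(zip(call_names, coords))[:5]:
    print(f"{name}: ({cx:+.2f}, {cy:+.2f})")    # calls with similar usage land nearby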
"}, {"section_index": "8", "section_name": "4 CONCLUSION", "section_text": "Our main contributions for designing intrusion detection systems as described in this paper have two parts: the introduction of a system-call language modeling approach and a new ensemble method. To the best of the authors' knowledge, our method is the first to introduce the concept of a language model, especially using LSTM, to anomaly-based IDS. The system-call language model can capture the semantic meaning of each call and its relation to other system calls. Moreover, we proposed an innovative and simple ensemble method that can better fit IDS design by focusing on lowering false alarm rates. We showed its outstanding performance by comparing it with existing state-of-the-art methods and demonstrated its robustness and generality by experiments on diverse benchmarks.

Figure 5: 2-D embedding of learned call representations. (a) shows the full representation space of the system calls that appeared in the training data; (b) and (c) show zoomed-in views of specific regions.

As discussed earlier, the proposed method also has excellent portability. In contrast to alternative methods, our proposed method incurs significantly smaller training overhead because it does not need to build databases or dictionaries to keep a potentially exponential number of patterns. Our method is compact and light in that the space required to store its parameters is small. The overall training and inference processes are also efficient and fast, as our methods can be implemented using efficient sequential matrix multiplications.

As part of our future work, we are planning to tackle the task of detecting elaborate contemporary attacks, including mimicry attacks (Wagner & Soto, 2002; Shu et al., 2015), by more advanced methods. Our proposed method allows us to estimate the likelihood of arbitrary sections in a given system-call sequence, which may be helpful for analyzing the capability of handling mimicry attacks. For instance, it is possible to determine whether there exists a sufficiently long section (rather than the whole sequence) with an average log-likelihood below the threshold. In addition, we are considering designing a new framework to build a robust model in on-line settings by collecting large-scale data generated from distributed environments. For optimization of the present work, we would be able to alter the structure of the RNNs used in our system-call language model and ensemble algorithm. Finally, we anticipate that a hybrid method that combines signature-based approaches and feature engineering will allow us to create more accurate intrusion detection systems."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was supported by BK21 Plus Project in 2016 (Electrical and Computer Engineering, Seoul National University)."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 2003.

Andrew P Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30(7):1145-1159, 1997.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pp. 1684-1692, 2015.

Xuan Dau Hoang, Jiankun Hu, and Peter Bertok. A multi-layer model for anomaly intrusion detection using program sequences of system calls. In Proc. 11th IEEE Intl. Conf. Citeseer, 2003.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Steven A Hofmeyr, Stephanie Forrest, and Anil Somayaji. Intrusion detection using sequences of system calls. Journal of Computer Security, 6(3):151-180, 1998.

Jiankun Hu, Xinghuo Yu, Dong Qiu, and Hsiao-Hwa Chen. A simple and efficient hidden markov model scheme for host-based anomaly intrusion detection. IEEE Network, 23(1):42-47, 2009.

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.

Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In Proc. ICML, volume 30, 2013.

John McHugh. Testing intrusion detection systems: a critique of the 1998 and 1999 DARPA intrusion detection system evaluations as performed by Lincoln Laboratory. ACM Transactions on Information and System Security, 3(4):262-294, 2000.
Yihua Liao and V Rao Vemuri. Using text categorization techniques for intrusion detection. In USENIX Security Symposium, volume 12, pp. 51-59, 2002.

Richard P Lippmann, David J Fried, Isaac Graf, Joshua W Haines, Kristopher R Kendall, David McClung, Dan Weber, Seth E Webster, Dan Wyschogrod, Robert K Cunningham, et al. Evaluating intrusion detection systems: The 1998 DARPA off-line intrusion detection evaluation. In DARPA Information Survivability Conference and Exposition, 2000. DISCEX'00. Proceedings, volume 2, pp. 12-26. IEEE, 2000.

Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In NAACL-HLT, pp. 746-751, 2013.

Srinivas Mukkamala, Guadalupe Janoski, and Andrew Sung. Intrusion detection using neural networks and support vector machines. In Neural Networks, 2002. IJCNN'02. Proceedings of the 2002 International Joint Conference on, volume 2, pp. 1702-1707. IEEE, 2002.

Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of The 30th International Conference on Machine Learning, pp. 1310-1318, 2013.

Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.

Jake Ryan, Meng-Jang Lin, and Risto Miikkulainen. Intrusion detection with neural networks. In Advances in Neural Information Processing Systems, pp. 943-949, 1998.

Xiaokui Shu, Danfeng Yao, and Naren Ramakrishnan. Unearthing stealthy program attacks buried in extremely long execution paths. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pp. 401-413. ACM, 2015.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.

Ralf C Staudemeyer. Applying long short-term memory recurrent neural networks to intrusion detection. South African Computer Journal, 56(1):136-154, 2015.

Kymie Tan and Roy A Maxion. Determining the operational limits of an anomaly-based intrusion detector. IEEE Journal on Selected Areas in Communications, 21(1):96-110, 2003.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(2579-2605):85, 2008.

David Wagner and Paolo Soto. Mimicry attacks on host-based intrusion detection systems. In Proceedings of the 9th ACM Conference on Computer and Communications Security, pp. 255-264. ACM, 2002.

Gang Wang, Jinxing Hao, Jian Ma, and Lihua Huang. A new approach to intrusion detection using artificial neural networks and fuzzy clustering. Expert Systems with Applications, 37(9):6225-6232, 2010.

Christina Warrender, Stephanie Forrest, and Barak Pearlmutter. Detecting intrusions using system calls: Alternative data models. In Security and Privacy, 1999. Proceedings of the 1999 IEEE Symposium on, pp. 133-145. IEEE, 1999.

Miao Xie, Jiankun Hu, Xinghuo Yu, and Elizabeth Chang. Evaluating host-based anomaly detection systems: Application of the frequency-based algorithms to ADFA-LD. In Network and System Security, pp. 542-549. Springer, 2014.

Esra N Yolacan, Jennifer G Dy, and David R Kaeli. System call anomaly detection using multi-HMMs. In Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, pp. 25-30. IEEE, 2014.

Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014."}]
Syoiqwcxx
[{"section_index": "0", "section_name": "LOCAL MINIMA IN TRAINING OF DEEP NETWORKS", "section_text": "Grzegorz Swirszcz, Wojciech Marian Czarnecki & Razvan Pascanu
DeepMind
{swirszcz,lejlot,razp}@google.com

There has been a lot of recent interest in trying to characterize the error surface of deep models. This stems from a long-standing question. Given that deep networks are highly nonlinear systems optimized by local gradient methods, why do they not seem to be affected by bad local minima? It is widely believed that training of deep models using gradient methods works so well because the error surface either has no local minima, or if they exist they need to be close in value to the global minimum. It is known that such results hold under very strong assumptions which are not satisfied by real models. In this paper we present examples showing that for such theorems to be true, additional assumptions on the data, initialization schemes and/or the model classes have to be made. We look at the particular case of finite size datasets. We demonstrate that in this scenario one can construct counter-examples (datasets or initialization schemes) where the network does become susceptible to bad local minima over the weight space."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep Learning (LeCun et al., 2015; Schmidhuber, 2015) is a fast growing subfield of machine learning, with many impressive results. One particular criticism often brought up against this family of models is the fact that it relies on non-convex functions which are optimized using local gradient descent methods.
This means one has no guarantee that the optimization algorithm will converge to a meaningful minimum, or even that it will converge at all. However, this theoretical concern seems to have little bearing in practice.

In Dauphin et al. (2013) a conjecture had been put forward for this based on insights from statistical physics which point to the scale of neural networks as a possible answer. The claim is that the error structure of neural networks might follow the same structure as that of random Gaussian fields, which have been recently understood and studied in Fyodorov & Williams (2007); Bray & Dean (2007). The critical points of these functions, as the dimensionality of the problem increases, seem to have a particularly friendly behaviour where local minima align nicely close to the global minimum of the function. Choromanska et al. (2015) provides a study of the conjecture by mapping deep neural models onto spin-glass ones for which the above structure holds. This work has been extended further (see Section 2 for a review of the topic).

We believe many of these results do not trivially extend to the case of finite size datasets/finite size models. The learning dynamics of the neural network in this particular case can be arbitrarily bad. Our assertions are based on constructions of counterexamples that exploit particular architectures, the full domain of the parameters and particular datasets.

One view, that can be dated back to Baldi & Hornik (1989), about why the error surface of neural networks seems well behaved is the one stated in Dauphin et al. (2013). We would refer to this hypothesis as the "no bad local minima" hypothesis. In Baldi & Hornik (1989) it is shown that an MLP with a single linear intermediate layer has no local minima, only saddle points and a global minimum. This intuition is carried further by Saxe et al. (2014; 2013), where deep linear models are studied. While, from a representational perspective, deep linear models are not useful, the hope is that the learning dynamics of such models can be mathematically understood while still being rich enough to mirror the dynamics of nonlinear networks. The findings of these works are aligned with Baldi & Hornik (1989) and suggest that one has only to go through several saddles to reach a global minimum.

These intuitions are expressed clearly for generic deep networks in Dauphin et al. (2013). The key observation of this work is that intuitions from low-dimensional spaces are usually misleading when moving to high-dimensional spaces. The work makes a connection with deep results obtained in statistical physics. In particular, Fyodorov & Williams (2007); Bray & Dean (2007) showed, using Replica Theory (Parisi, 2007), that random Gaussian error functions have a particularly friendly structure. Namely, if one looks at all the critical points of the function and plots error versus the (Morse) index of the critical point (the number of negative eigenvalues of the Hessian), these points align nicely on a monotonically increasing curve. That is, all points with a low index (note that every minimum has this index equal to 0) have roughly the same performance, while critical points of high error implicitly have a large number of negative eigenvalues, which means they are saddle points.

These observations align also with the theory of random matrices (Wigner, 1958), which predicts the same behaviour for the eigenvalues of a random matrix as the size of the matrix grows. The claim of Dauphin et al. (2013) is that the same structure holds for neural networks as well, when they become large enough. A similar claim is put forward in Sagun et al. (2014). The conjecture is very appealing as it provides a strong argument for why deep networks end up performing not only well, but also reliably so. Choromanska et al. (2015) provides a study of the conjecture that rests on recasting a neural network as a spin-glass model. To obtain this result several assumptions need to be made, which the authors of the work, at that time, acknowledged were not realistic in practice. The same line of attack is taken by Kawaguchi (2016).

Goodfellow et al. (2016) argues and provides empirical evidence that while moving from the original initialization of the model along a straight line to the solution (found via gradient descent), the loss seems to be only monotonically decreasing, which speaks towards the apparent convexity of the problem. Soudry & Carmon (2016); Safran & Shamir (2015) also look at the error surface of the neural network, providing theoretical arguments for the error surface becoming well behaved in the case of over-parametrized models.

A different view, presented in Lin & Tegmark (2016); Shamir (2016) and aligned with this work, is that the underlying easiness of optimizing deep networks does not rest simply in the emerging structures due to high-dimensional spaces, but is rather tightly connected to the intrinsic characteristics of the data these models are run on.

We propose to analyze the error surface of rectified MLPs on finite datasets. The approach we take is a constructive one. We build examples of datasets and model initializations that result in bad learning dynamics.
In most examples we use ReLU units, as they are the most commonly used activation functions for both classification and regression tasks (e.g. in deep reinforcement learning (Mnih et al., 2015; 2016)). It is worth noting, however, that the phenomena we are demonstrating are not limited in nature to the ReLU setup; they manifest themselves also for saturating activation functions like sigmoids.

In Figure 1 we present 3 examples of local minima for regression using a single layer with 1, 2 and 3 hidden rectifier units on 1-dimensional data. For the sake of simplicity of our presentation we will describe in detail the case with 1 hidden neuron; the other two cases can be treated similarly. In the case of one hidden neuron the regression problem becomes

\operatorname*{argmin}_{w,b,v,c} L(w, b, v, c) = \sum_{i=1}^{n} \big(v\,\mathrm{ReLU}(w x_i + b) + c - y_i\big)^2,   (1)

where the dataset D1 consists of the points (x_1, y_1) = (5, 2), (x_2, y_2) = (4, 1), (x_3, y_3) = (3, 0), (x_4, y_4) = (1, -3), (x_5, y_5) = (-1, 3).

Proposition 1. For the dataset D1 and L defined in Equation (1), the point v = 1, b = -3, w = 1, c = 0 is a local minimum of L, which is not a global minimum.

Remark 1. The point (1, -3, 1, 0) is a minimum, but it is not a "strict" minimum - it is not isolated, but lies on a 1-dimensional manifold on which L = 18.

Remark 2. One could ask whether blind spots are the only reason for the bad behaviour of rectifier nets. The answer is actually negative; as the following examples show, they can be completely absent in local optima while at the same time existing in a global solution!

Figure 1: Local minima for ReLU-based regression: (a) two local minima for 1 hidden neuron; (b) two local minima for 2 hidden neurons; (c) two local minima for 3 hidden neurons. Both lines represent local optima, where the blue one is better than the red one.

Maybe surprisingly, the global solution has a blind spot - all neurons deactivate in x_3. Nevertheless, the network still has a 0 training error. This shows that even though blind spots were used previously to construct very bad examples for neural nets, sometimes they are actually needed to fit the dataset.

Proposition 3. Let us consider a dataset D3 with d = 1, given by points (x_1, y_1) = (-1, 3), (x_2, y_2) = (0, 0), (x_3, y_3) = (1, -1), (x_4, y_4) = (10, -3), (x_5, y_5) = (11, -4), (x_6, y_6) = (12, -6) (Figure 1 (c)). Then, for a rectifier network with m = 3 hidden units and a squared error loss, the set of weights w = (1.5, -1.5, 1.5), b = (1, 0, -13 - \epsilon), v = (1, 1, -1), c = 1 is a better local minimum than the local minimum obtained for w = (-2, 1, 1), b = (3 + \epsilon, -10, -11), v = (1, -1, -1), c = -3.

Proof. Completely analogous, using the fact that in each part of the space the linear models are either optimal linear regression fits (if there is just one neuron active) or perfect (0 error) fits when two neurons are active and combined.

Note again that the above construction does not rely on the blind spot phenomenon. The idea behind this example is that if, due to initial conditions, the model partitions the input space in a suboptimal way, it might become impossible to find the optimal partitioning using gradient descent. Let us call (-\infty, \delta) region I, and [\delta, \infty) region II. Both solutions in Proposition 3 are constructed in such a way that each one has the best fit for the points assigned to any given region, the only difference being the number of hidden units used to describe each of them. In the local optimum two neurons are used to describe region II, while only one describes region I. Symmetrically, the better solution assigns two neurons to region I (which is more complex) and only one to region II.
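Proposition 1 is easy to probe numerically. The following check is our own illustration, not part of the original argument: it evaluates L on D1 at the claimed point, confirms that small random perturbations do not decrease the loss, and exhibits a strictly better configuration (consistent with the L = 18 and L = 14 values visible in Figure 1(a)):

import numpy as np

X = np.array([5.0, 4.0, 3.0, 1.0, -1.0])       # the dataset D1
Y = np.array([2.0, 1.0, 0.0, -3.0, 3.0])

def L(w, b, v, c):
    return np.sum((v * np.maximum(w * X + b, 0.0) + c - Y) ** 2)

theta = np.array([1.0, -3.0, 1.0, 0.0])         # (w, b, v, c) from Proposition 1
print(L(*theta))                                # 18.0

rng = np.random.default_rng(0)
print(all(L(*(theta + rng.normal(scale=1e-3, size=4))) >= L(*theta) - 1e-9
          for _ in range(10_000)))              # True: perturbations do not help

print(L(-1.0, 1.0, 1.5, 0.0))                   # 14.0 < 18.0: not a global minimum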
We believe that the core idea behind this construction can be generalized (in a non-trivial way) to high-dimensional problems. We plan to extend the construction as future work.

Remark 3. In the examples we used only ReLU (and in one case a sigmoid) activation functions, as they are the most commonly used in practice. Similar examples can be constructed for different activation functions; however, the constructions need some modifications and get more technically complicated.

Proposition 4. There exists an infinite number of normalized (whitened) datasets such that for any feedforward rectifier network architecture and an arbitrary \epsilon in [0, 1), there exists a normal distribution used to initialize the weights (with biases initialized to 0) such that with probability at least 1 - \epsilon the gradient-based techniques using log loss never achieve 0 training error nor do they ever converge (the gradient is never zero). Furthermore, this dataset can have a full-rank covariance matrix and be linearly separable.

Even though the above construction requires control over the means of the normal distributions the weights are drawn from, as one can see in Figure 2, they do not have to be very large in practice. In particular, if one uses an initialization with \sigma as prescribed by LeCun et al. (1998) or Glorot & Bengio (2010), then a value of \mu = 0.24 is sufficient to break the learning, even if we have 10,000 hidden units in each of 100 hidden layers. Using a fixed \sigma = 0.01 instead, learning fails even with \mu = 0.07.

Figure 2: Leftmost: exemplary dataset constructed in Proposition 4; color denotes label. Two middle ones: how big the mean \mu of the normal distribution N(\mu, \sigma^2) has to be in order to have at least 99% probability of the effect (very bad local minima) described in Proposition 4, as a function of the number of hidden units in a 2-h-...-h-1 classification network (LeCun'98 and Xavier'10 denote the initializations of LeCun et al. (1998) and Glorot & Bengio (2010), respectively); in both cases the original papers used \mu = 0. Rightmost: Proposition 5 - probability of learning failing with an increasing number of layers when the initialization is fully correct.

It is worth noting that even though this observation is about the existence of such datasets, our proof is actually done by construction, meaning that we show a way to build an infinite number of such datasets (as opposed to a purely existential proof). We would like to remark that it was well known that the initialization is important for the behaviour of learning (Glorot & Bengio, 2010; LeCun et al., 1998; Sutskever et al., 2013; Pascanu et al., 2013). Here we are exploiting these ideas in order to better understand the error surface of the model.

If we do not care about the lack of convergence, and we are simply interested in learning failure, we can prove an even stronger proposition, which works for every single dataset:

Proposition 5. For every dataset, every feedforward rectifier network built for it, and every distribution used to initialize both weights and biases such that E[w] = 0, E[b] = 0, Var[w] > 0, Var[b] > 0, the probability that gradient-based training of any loss function will lead to a trivial model (predicting the same label for all datapoints) goes to 1 as the number of hidden layers goes to infinity.
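The blind-spot mechanism behind Proposition 4 can be demonstrated in a few lines. The construction below is ours and is only in the spirit of the proposition (the data are not whitened): on inputs confined to the positive quadrant, a sufficiently negative weight mean combined with zero biases deactivates every hidden unit, and the ReLU gate then zeroes the gradient with respect to those weights:

import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 2)))           # inputs in the positive quadrant
y = (X[:, 0] > X[:, 1]).astype(float)           # a linearly separable labelling

W = rng.normal(-10.0, 0.01, size=(2, 50))       # weights ~ N(mu, sigma^2) with mu < 0
b = np.zeros(50)                                # biases initialized to 0
h = np.maximum(X @ W + b, 0.0)                  # hidden activations

print(h.max())                                  # 0.0: every unit is dead on every example
# dL/dW = X^T (delta * 1[pre-activation > 0]) = 0, so gradient descent can
# never revive the first layer, whatever loss sits on top of it.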
We can extend the previous proposition to show that for any regression dataset a rectifier model has at least one local minimum with a large basin of attraction (over the parameter space). Again, we rely on the blind spots of the rectifier models. We show that there exists a blind spot that corresponds to a region in parameter space of the same dimensionality (codimension 0). The construction relies on the fact that the dataset is finite. As such, it is bounded, and one can compute conditions for the weights of any given layer of the model such that for any datapoint all the units of that layer are deactivated. Furthermore, we show that one can obtain a better solution than the one reached from such a state. The formalization of this result is as follows.

We consider a k-layer deep regression model using m ReLU units, ReLU(x) = max(0, x). Our dataset is a collection (x_i, y_i) in R^d x R, i = 1, ..., N. We denote h_n(x_i) = ReLU(W_n h_{n-1}(x_i) + b_n), where the ReLU functions are applied component-wise to the vector W_n h_{n-1}(x_i), and h_0(x) = x. We also denote the final output of the model by M(x_i) = W_k h_{k-1} + b_k. Solving the regression problem means finding

\operatorname*{argmin}_{(W_n)_{n=1}^{k}, (b_n)_{n=1}^{k}} L\big((W_n)_{n=1}^{k}, (b_n)_{n=1}^{k}\big) = \sum_{i=1}^{N} \big(M(x_i) - y_i\big)^2.

We will denote by M(a_1, ..., a_L) the mean of the numbers a_1, ..., a_L. Let us state two simple yet, in our opinion, useful Lemmata.

Lemma 2. If W_1 x_i < -b_1 holds (component-wise) for all i, then the model M has a constant output. Moreover, applying local optimization does not change the values of W_1, b_1.

Proof. Straightforward from the definitions.

Combining these two lemmata yields: for any such parameter point \Theta, i) \Theta is a local minimum of the error surface; ii) if the first layer contains at least 3 neurons and if the dataset (x_i, y_i) is decent, then \Theta is not a global minimum.
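Lemma 2 can be verified mechanically with automatic differentiation. In the sketch below (shapes and values are illustrative), the first-layer bias is pushed below -max_i W_1 x_i, so every unit is off on every datapoint; the output is then constant, and the gradients with respect to W_1 and b_1 vanish exactly:

import torch

torch.manual_seed(0)
X, y = torch.randn(64, 10), torch.randn(64)     # a finite (hence bounded) dataset

W1 = torch.randn(10, 20, requires_grad=True)
b1 = (-(X @ W1).detach().amax(dim=0) - 1.0).requires_grad_()   # W1 x_i + b1 < 0 for all i
v = torch.randn(20, requires_grad=True)
c = torch.zeros(1, requires_grad=True)

out = torch.relu(X @ W1 + b1) @ v + c           # M(x) is constant (equal to c)
loss = ((out - y) ** 2).sum()
loss.backward()
print(out.var().item(), W1.grad.abs().max().item(), b1.grad.abs().max().item())
# prints 0.0 0.0 0.0 -- local optimization cannot move (W1, b1) out of the blind spot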
It is worth noticing though that the initialization. is even more important in that setting; destroying the structure makes the model significantly more susceptible to bad initializations than when trained on the data with unpermuted labels (second. column of Figure 3 the network requires at least 400 units to be able to achieve O training error)..\nw~ N(10, 0.01), w~ N(0, 0. 01), w, b~ N(0, 0. 01) w,b~N(0,1) b~ N(0, 0. 01) b~N(10,0.01) w,b~N(-1,1) 1.2 2 layer model 5 layer model 0.8 MNIST 0.6 0.4 ,0.2 0.0 500 10001500 2000 0 5001000 1500 2000 0 5001000 1500 2000 5001000 1500 2000 5001000 1500 2000 # hidden units # hidden units # hidden units # hidden units # hidden units w~N(-10, 0.01), w~ N(0, 0. 01), w, b~ N(0, 0. 01) w,b~N(0,1) b~ N(0, 0. 01) b~N(-10, 0.01) w,b~N(-1,1) 1.2 A 2 layer mode! ereenner eeeee) shhited 1.0 5 layer model I 0.8 0.6 MNIST 0.4 0.2 0.0 5001000 15002000 500 1000 1500 2000 500 100015002000 0 500 1000 1500 2000 500 100015002000 # hidden units # hidden units # hidden units # hidden units # hidden units\nFigure 3: Plots of final training accuracy on MNIST dataset after 1,ooo,o00 updates. Each point is. a single neural net (blue triangles - 5 layer models with same number of hidden units in each layer. red triangles - 2 layer models with same number of hidden units in each layer). The title of each column shows the distribution used to initialize weights (w) and biases (b). Top row shows results on MNIST, bottom row shows results when the labels of MNIST had been randomly permuted. The. number of hidden units per layer is indicated on x-axis..\nThe bad initializations used in these experiments are meant to target the blind spots of the rectifiers The main idea is that by changing the initialization of the model (the mean of the normal distribution used to sample weights) one can force all hidden units to be deactivated for the most or for all examples in the training set. This prevents said examples from being learned, even though the task might be linearly separable. The construction may seem contrived, but it has important theoretical consequences. It shows that one can not prove well behaved learning for finite sized neural networks applied to finite sized data, without taking into account the initialization or data. We formalize this idea in the Proposition4] making the observation that the effect can be achieved by either changing the initialization of the model, or the data. In particular, by introducing specific outliers, one can force most of the data examples in the dataset to be in the blind spot of the neural network, despite being whitened.\nDetails of the experimental setup are given in Appendix[A] The results presented in Figure[3](botton row), suggest that (from an optimization perspective) the important relationship is not only the on. between the inputs and targets, but also between the inputs and the way the model partitions the inpu. space (in here we focus on rectifier models which are, from a mathematical perspective, piece-wise. linear functions). To empirically test if this is a viable hypothesis we consider the MNIST dataset. where we scale the inputs by a factor t. 
The intuition is not to force the datasets into the blind spot of the model, but rather to concentrate most of the datapoints in very few linear regions (given by the initialization of the MLP). While these results do not necessarily point towards the model being locked in a bad minimum, they suggest that learning becomes less well behaved (see Figure 4).

Figure 4: Plots of the final training accuracy on the scaled MNIST dataset after 1,200,000 updates of a single-hidden-layer neural net. The title of each column shows the scaling factor applied to the data (\tau = 1, .001, .0001, .00001).

Additional results on a simple Zig-Zag regression task are given in Figure 5. The dataset itself is in the left panel; the results are visualized in the right panels. Similarly to the MNIST case, the experiments suggest that as data become more concentrated in the same linear regions (of the freshly initialized model) learning becomes really hard, even if the model has close to 3000 units.

Figure 5: Plots of training MSE on the Zig-Zag regression task after 2,000,000 updates. See the caption of Figure 4 for more details. The left panel depicts the Zig-Zag regression task with three found solutions for \tau = 0.01 (MSE = 0.225, 0.209 and 0.194). The actual datapoints are shown by the diamond-shaped dots.

"}, {"section_index": "4", "section_name": "4.2 THE JELLYFISH - SUBOPTIMAL MODELS IN CLASSIFICATION USING RELU AND SIGMOIDS", "section_text": "To improve our understanding of learning dynamics beyond exploiting blind spots, we look at one of the most theoretically well-studied datasets, the XOR problem. We analyze the dataset using a single-hidden-layer network (with either ReLU or sigmoid units).

A first observation is that while SGD can solve the task with only 2 hidden units, full-batch methods do not always succeed. Replacing gradient descent with more aggressive optimizers like Adam does not seem to help, but rather tends to make it more likely to get stuck in suboptimal solutions (Table 1).

Figure 6: Examples of different outcomes of learning on the Jellyfish dataset. Each group of panels shows the hidden 1 activation, hidden 2 activation, output activation, and resulting classification for (a) an optimally converged net and (b) a stuck net.

By exploiting observations made in the failure modes observed for the XOR problem, we were able to construct a similar dataset, the Jellyfish, that results in suboptimal learning dynamics. The dataset is formed of four datapoints, where the positive class is given by [1.0, 0.0], [0.2, 0.6] and the negative one by [0.0, 1.0], [0.6, 0.2]. The datapoints can be seen in Figure 6.
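A compact way to reproduce this phenomenon is to run full-batch gradient descent on the four Jellyfish points with a 2-h-1 ReLU network over many seeds and count how often a run fails to reach 0 training error. The sketch below is ours, with illustrative hyper-parameters rather than those of the reported experiments:

import numpy as np

X = np.array([[1.0, 0.0], [0.2, 0.6], [0.0, 1.0], [0.6, 0.2]])
y = np.array([1.0, 1.0, 0.0, 0.0])              # the Jellyfish dataset

def run(seed, steps=10_000, lr=0.1, h=2):
    rng = np.random.default_rng(seed)
    W1, b1 = rng.normal(0, 1, (2, h)), np.zeros(h)
    w2, b2 = rng.normal(0, 1, h), 0.0
    for _ in range(steps):
        a = np.maximum(X @ W1 + b1, 0.0)        # hidden ReLU layer
        p = 1.0 / (1.0 + np.exp(-(a @ w2 + b2)))  # sigmoid output
        g = (p - y) / len(X)                    # grad of mean log loss wrt logits
        ga = np.outer(g, w2) * (a > 0)          # backprop through the ReLU gate
        w2 -= lr * a.T @ g; b2 -= lr * g.sum()
        W1 -= lr * X.T @ ga; b1 -= lr * ga.sum(axis=0)
    return np.all((p > 0.5) == (y > 0.5))       # did the run reach 0 training error?

print(np.mean([run(s) for s in range(100)]))    # fraction of successful runs, typically < 1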
The dataset\nHidden 1 activation Hidden 2 activation Output activation Classification Hidden 1 activation Hidden 2 activation Output activation Classification (a) Optimally converged net for Jellyfish (b) Stuck net for Jellyfish\ntivation Hidden 2 activation Output activation Cassification Hidden 1 activation Hidden 2 activation Output activation Cassification\nTable 1: \"Convergence\" rate for 2-h-1 network with random initializations on simple 2-dimensional datasets using either Adam or Gradient Descent (GD) as an optimizer."}, {"section_index": "4", "section_name": "5 DISCUSSION", "section_text": "Previous results (Dauphin et al.|. 2013 Saxe et al. 2014; Choromanska et al. 2015) provide in ightful description of the error surface of deep models under general assumptions divorced fron he specifics of the architecture. While such analysis is very valuable not only for building up the. ntuition but also for the development of the tools for studying neural networks, it only provides. one facade of the problem. In this work we move from the generic to the specific. We show tha or finite sized models/finite sized datasets one does not have a globally good behaviour of learning egardless of the model size (and even of the ratio of model size to the dataset size)..\nThe overwhelming amount of empirical evidence points towards learning being well behaved in practice. We argue that the way to reconcile these observations is to show that the well-behaved. learning dynamics are local and conditioned on the data structure, initialization and perhaps on other architectural choices. One can imagine a continuum ranging from the very specific, where ev-. ery detail of the setup is important to attain good learning dynamics, to the generic, where learning. is globally well behaved regardless of dataset or initialization. We believe that an important step. forward in the theoretical study of the neural networks can be made by identifying where this class. of models falls on that continuum. In particular, what are the most generic sets of constraints that. need to be respected in order to attain the good behaviour. Our results focus on constructing coun-. terexamples which result in a bad learning dynamics. While this does not lead directly to sufficient. conditions for well-behaved systems, we hope that by carving out the space of possible conditions. we are moving forward towards that goal..\nSimilar toLin & Tegmark (2016) we put forward a hypothesis that the learning is only well behaved conditioned on the structure of the data. We point out, that for the purpose of learning, this structure can not be divorced from the particular initialization of the model. We postulate that learning be comes difficult if the data is structured such that there exist regions with a high density of datapoints (that belong to different classes) and the initialization results in models that assign these points to very few linear regions. While constraining the density per region alone might not be sufficient, it can provide a good starting point to understand learning for rectifier models. Another interesting question arising in that regard is what are the consequences on overfitting for enforcing a relatively low density of points per linear regions? Understanding of the structure of the error surface is ar extremely challenging problem. 
We believe that as such, in agreement with a scientific tradition, it should be approached by gradually building up a related knowledge base, both by trying to obtain positive results (possibly under weakened assumptions, as it was done so far) and by studying the obstacles and limitations arising in concrete examples.\nh XOR XOR Jellyfish Jellyfish XOR XOR Jellyfish Jellyfish ReLU Sigmoid ReLU Sigmoid ReLU Sigmoid ReLU Sigmoid 2 Adam 28% 79% 7% 0% GD 23% 90% 16% 62% 3 Adam 52% 98% 34% 0% GD 47% 100% 33% 100% 4 Adam 68% 100% 50% 2% GD 70% 100% 66% 100% 5 Adam 81% 100% 51% 27% GD 80% 100% 68% 100% 6 Adam 91% 100% 61% 17% GD 89% 100% 69% 100% 7 Adam 97% 100% 69% 58% GD 89% 100% 86% 100%\nis formed of four datapoints, where the positive class is given by [1.0, 0.0], [0.2, 0.6] and the negative one by 0.0, 1.0], [0.6, 0.2]. The datapoints can be seen in the Figure[6.\nCompared to the XOR problem it seems the Jellyfish problem poses even more issues, especially for ReLU units, where with 4 hidden units one still only gets 2 out of 3 runs to end with O training error (when using GD). One particular observation (see Figure 6) is that in contrast with good solutions. when the model fails on this dataset, its behaviour close to the datapoints is almost linear. We argue hence, that the failure mode might come from having most datapoints concentrated in the same linear region of the model (in ReLU case), hence forcing the model to suboptimally fit these points."}, {"section_index": "5", "section_name": "REFERENCES", "section_text": "Baldi, P. and Hornik, K. Neural networks and principal component analysis: Learning from exam ples without local minima. Neural Networks, 2(1):53-58, 1989\nBray, Alan J. and Dean, David S. Statistics of critical points of gaussian fields on large-dimensiona spaces. Physics Review Letter, 98:150201, Apr 2007.\nDauphin, Yann, Pascanu, Razvan, Gulcehre, Caglar, Cho, Kyunhyun, Ganguli, Surya, and Bengio Yoshua. Identifying and attacking the saddle point problem in high dimensional non-convex optimization. NIPS, 2013.\nKawaguchi, Kenji. Deep learning without poor local minima. CoRR, abs/1605.07110, 2016\nLeCun, Yann, Bottou, Leon, Orr, Genevieve B., and Muller, Klaus-Robert. Efficient backprop. Ir Neural Networks: Tricks of the Trade. 1998\nLeCun, Yann, Bengio, Yoshua, and Hinton, Geoffrey. Deep learning. Nature, 521(7553):436-444 5 2015. ISSN 0028-0836. doi: 10.1038/nature14539\nSafran, Itay and Shamir, Ohad. On the quality of the initial basin in overspecified neural networks CoRR, abs/1511.04210, 2015\nMnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Rusu, Andrei A, Veness, Joel, Bellemare. Marc G, Graves, Alex, Riedmiller, Martin, Fidjeland, Andreas K, Ostrovski, Georg, et al. Human level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.\nParisi, Giorgio. Mean field theory of spin glasses: statistics and dynamics. Technical Report Arxiv 0706.0094, 2007.\nPascanu, Razvan, Mikolov, Tomas, and Bengio, Yoshua. On the difficulty of training recurrent neural networks. In ICML'2013. 2013\nWigner, Eugene P. On the distribution of the roots of certain symmetric matrices. The Annals o Mathematics, 67(2):325-327, 1958\nZhang, Chiyuan, Bengio, Samy, Hardt, Moritz, Recht, Benjamin, and Vynalis, Oriol. Understanding deep learning requires rethinking generalization. 
In Submitted to Int'l Conference on Learning Representations, ICLR, 2017.\nFor the experiments depicted in Figure 3|we used MNIST dataset with data divided by 255 and one-hot encoded labels and we:\nFor the experiments depicted in Figure 4|we used MNIST dataset with data divided by 255 anc one-hot encoded labels and we:\nSoudry. Daniel and Carmon. Yair. No bad local minima: Data independent training error guarantees for multilayer neural networks. CoRR, abs/1605.08361, 2016.\nran 100 jobs, with the number of hidden units h sampled from [100, 2000] jointly for a the hidden layers, meaning that the model was 784 - h - h ... - h 10, used Adam as an optimizer, with learning rate of 1e - 4 (and other arguments default) each job was ran for 1,000,000 updates, batch size used was 200.\nran 400 jobs, with the number of hidden units h sampled from [100, 2000] for MNIST experiments and [100, 3000] for the Zig-Zag problem, we used Adam as an optimizer with learning rate randomly sampled to be either 1e - 4 or 1e - 3 (and other arguments default), each job was ran for 2,000,000 updates for the Zig-Zag problem and 1,200,000 updates in the MNIST case, batch size used was 10 for the Zig-Zag problem (dataset size is 20 points) and 50 for MNIST experiment. Initialization of weights and biases was from a Gaussian with O mean and standard devia- tion Vfanin"}, {"section_index": "6", "section_name": "B.1 PROOF OF PROPOSITION4", "section_text": "For simplicity, let us consider a network with only one hidden layer of h units and a binary classi fication task, implying we have a single output neuron. Analogous result holds for any number of. classes and for any arbitrary depth of the network as well. We use the following notation: h(x. is a vector of activations of hidden units when presented with i-th training sample, M(x) is the. activation of the output neuron given the same sample, W is matrix of hidden weights, b is a vecto. of biases in the hidden layer and finally v, c are the weights and bias in the output layer. The whole. classification becomes\nLet us now consider a dataset where N - 1 points have all features negative, and a single point denoted x;* (with positive label) which has all the features positive. We can always find such points that their coordinate-wise mean is equal to 0 and their standard deviation equals 1, since we can place all N - 1 points very close to the origin, and the point x,* arbitrary far away in the positive part of the input space. Our dataset is therefore normalized (whitened) and it can have a full rank covariance matrix (since the construction does not depend on nothing besides signs of the features). We want to compute\nSince, by construction Vii*X; < O and x,* > 0, it is also true that if all the weights W are non. negative (and at least one is positive), then all the activations (after ReLU) of the hidden units wil be 0 besides the one positive activation of h(x;*) which comes directly from the assumption tha. biases are initialized to d1 Consequently\nP(V;i*h(x;)= 0^ h(x;*) > 0) > P(W > 0)\nP(Vii*h(xi) = 0^ h(x;*) > 0) P(W > 0)\nand given that weights initializations are independent samples from the same distribution we get\ndh dh P(W >0) =II i=1\nwhere , o are parameters of the distribution we use to initialize weights and d is input space di mensionality. All that is left is to show that during any gradient based optimization these weight will not be corrected, which requires one more assumption - that the output weights are positive as well. 
If this is true, then Vii* M(x) = 0 (again using that the output bias is zero) and M(x*) > ( and of course P(v > 0) = (So~ N(, o2))h. Now we have a fully initialized model which maps all the samples to O, and one positive sample to some positive value. Consequently, given the fac that we use log loss, there holds d/dvx > 0 and d/dwrl O for all k,l. Indeed, since these changes are all increasing the probability of good classification of x* and all remaining points are ir inactive part of ReLUs thus they cannot contribute to partial derivatives. Therefore, during any gra dient based (as well as stochastic and mini-batch based) learning the projection of samples mappec to 0 will not change, and the projection of x* will grow to infinity (so the sigmoid approaches 1) Consequently we constructed an initialization scheme which with probability at least\ngives the initial conditions of the net, where despite learning with log loss, we always classify all arbitrary labeled N - 1 points to the same label (since they are all mapped to the same output value and we classify the unique positive sample with the valid label. Furthermore - the optimization never finishes, and there exists a network with a better accuracy, as we can label N -- 1 points in any. manner, including making it linearly separable by labeling according to the first feature only..\nIn order to generalize to arbitrary number of layers, we would similarly force all the weights to be positive, thus with parametrization of k-layer deep network 0 = (Wn) we would simply get\ndh+(k-2)h2+h P(V1<n<kWn> 0) = I P(Wn> 0) n=1\n1 We can still construct similar proof for biases taken from normal distributions as well.\nM(x) = vReLU(Wx;+ b) + c\nP(Vii*h(xi) = 0^ h(xi*) > O)\ndh+h P(W,v>0) = P(W>0)P(v> 0) 1 - e,\nand finally, if the biases are not zero, but have some some arbitrary values (fixed or sampled) we simply adjust the size of the weights accordingly. Instead of having them bigger than O we would compute probability of having them big enough to make the whole first layer to produce Os for every point besides x,* and analogously for the remaining layers.\nFurthermore, it is possible to significantly increase the probability of failure if we do not mind the. situation in which the learning process is not even starting. Proposition5laddresses this situation."}, {"section_index": "7", "section_name": "B.2 PROOF OF PROPOSITION5", "section_text": "Proof. Let us notice, that since we are using ReLU activations, the activations on jth hidden layer h are non-negative. Consequently, if the following layer has only negative weights and non-positive biases, then h;+1 = b+1 (as all the ReLUs are inactive), so the network has to output exactly the same value for every single point. Furthermore, during the gradient based learning we will neve change those weights as gradients going through ReLUs will be equal to zero.\nLet us now consider a deep rectifier network with k hidden layers with h neurons each. If only k > 2 we can use the above observation to compute for every j > 1:.\nand consequently, due to the assumptions about expected values\nAs every layer has the same number h of neurons,the values (bi > 0 V , Wj: 0) are equal for every i. Therefore\nDue to assumptions about distributions of biases and weights we know that\n0 < cdfw(0) < 1,0< cdfp(0) <\nClaim i) is a direct consequence of Corollary[1] It remains to prove ii). 
For that it is sufficient to show an example of a set of weighs 0 = ((Wn)n=1, (bn)n=1) such that ((Wn)n=1, (bn)n=1) > L((Wn)n=1, (bn)n=1). Let r be such that M({yp : xp = xr}) M({yp : p = 1,..., N}). Such point exists by assumption that the dataset is decent. Let H be a hyperplane passing through x, such that none of the points xs / x, lies on H. Then there exists a vector v such that |vT (xs xr) > 2 for all xs xr. Let y = v' xr. We define Wi in such a way that the first row of Wi is v , the second row is 2v and the third one is v again, and if the first layer has more than 3 neurons, we put all the remaining rows of W1 to be equal zero. We choose the first three biases of b1 to be -y + 1. 2y and - - 1 respectively. We denote = M({yp : xp xr}) and v = M({yp : xp = xr}) We then choose W, to be a matrix whose first row is (v , - v, v - , 0, . .. , O) and the other\nn layers) P(>1All neurons inactive in hidden layer j P(3j>1Vl,ibjl 0^ Wji < 0) =1- P(Vj>13ibjl> 0 V3;Wji O) k-1 = 1 - 11 P(3ibjl > 0 V 3;Wji 0)\nZ P(3j>1Vl,ibjl < 0^ Wji < 0) =1-P(Vi>13bjl>0V3;Wji 0 k-1 j=1\nk-1 1-IIP(3bjt>0V3;Wji>0) =1-P(3;b1>0V3;W1;0)k-1 j=1 =1-(1-P(V{b1i0^W1i<0))k-1 =1-[1- cdfb(0)'cdfw(0)h2|k-1\nlim 1 - [1 - cdfu(O)hcdfw(O)h21k-1 = 1. k>0\nrows are equal to 0. Finally, we choose the bias vector b2 = (, 0, . .., 0)7 If our network has only one layer the output is\n(v )ReLU(vTx, v - )ReLU(2vT xp 2y) +(v-)ReLU(vxp -1)+\nL(1+Ow,-3+0b,1+Ov,0c)=\n((1+ 0u)(3+ 3ow-3+ 0b) +c) F ((1 +dD(4+40w-3+db) +dc-D -\n3x+y+z)+(4x+y+z)2+(5x+y+z)2+2z2 + 18 > 18\nC(1 + Ow.-3+ &p.1 + Oa.8c) d+((1+0(4+40w-3+Ob)+Oc\n(4x+y+z)+(5x+y+z + 3z + 18 > 18\nL((Wn)n=1,(bn)n=1)=(yi-)2+ (yi -v)2 XpFxr Xp=Xr yi-)2+(yi-)2=(yi-p)=L((Wn)n=1(bn)n=1) Xn Yi\n((1+0u)(5+50w-3+0b)+0c-2)+(8c+3)2+(8c-3)2\n((1+0u)(5+58w-3+0b)+0c-2)2+(c+3)2+(8c-3)2="}] |
rJQKYt5ll | [{"section_index": "0", "section_name": "STEERABLE CNNs", "section_text": "Taco S. Cohen\nIt has long been recognized that the invariance and equivariance properties of a representation are critically important for success in many vision tasks. In this paper we present Steerable Convolutional Neural Networks, an efficient and flex ible class of equivariant convolutional networks. We show that steerable CNN achieve state of the art results on the CIFAR image classification benchmark. The mathematical theory of steerable representations reveals a type system in which any steerable representation is a composition of elementary feature types, each one associated with a particular kind of symmetry. We show how the parameter cost of a steerable filter bank depends on the types of the input and output features and show how to use this knowledge to construct CNNs that utilize parameters ef- fectively."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Much of the recent progress in computer vision can be attributed to the availability of large labelled datasets and deep neural networks capable of absorbing large amounts of information. While many practical problems can now be solved, the requirement for big (labelled) data is a fundamentally unsatisfactory state of affairs. Human beings are able to learn new concepts with very few labels. and reproducing this ability is an important challenge for artificial intelligence research. From an applied perspective, improving the statistical efficiency of deep learning is vital because in many domains (e.g. medical image analysis), acquiring large amounts of labelled data is costly.\nTo improve the statistical efficiency of machine learning methods, many have sought to learn invari ant representations. In deep learning, however, intermediate layers should not be invariant, because the relative pose of local features must be preserved for further layers (Cohen & Welling, 2016 Hinton et al., 2011). Thus, one is led to the idea of equivariance: a network is equivariant if the representations it produces transform in a predictable way under transformations of the input. Ir other words, equivariant networks produce representations that are steerable. Steerability makes i possible to apply filters not just in every position (as in a standard convolution layer), but in every pose, thus allowing for increased parameter sharing.\nPrevious work has shown that equivariant CNNs yield state of the art results on classification tasks (Cohen & Welling, 2016; Dieleman et al., 2016), even though they only enforce equivariance to small groups of transformations like rotations by multiples of 90 degrees. Learning representations that are equivariant to larger groups is likely to result in further gains, but the computational cost of current methods scales linearly with the size of the group, making this impractical. In this paper. we present the general theory of steerable CNNs, which covers previous approaches but also shows how the computational cost can be decoupled from the size of the symmetry group, thus paving the. way for future scaling.\nTo better understand the structure of steerable representations, we analyze them mathematically We show that any steerable representation is a composition of low-dimensional elementary feature types. Each elementary feature can be steered independently of the others, and captures a distinct characteristic of the input that has an invariant or \"objective\"' meaning. 
This doctrine of \"observer- independent quantities\" was put forward by (Weyl, 1939, ch. 1.4) and is used throughout physics. It has been applied to vision and representation learning by Kanatani (1990); Cohen (2013)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The mentioned type system puts constraints on the network weights and architecture. Specifically, since an equivariant filter bank is required to map given input feature types to given output feature types, the number of parameters required by such a filter bank is reduced. Furthermore, by the same logic that tells us not to add meters to seconds, steerability considerations prevent us from adding features of different types (e.g. for residual learning (He et al., 2016a)).\nThe rest of this paper is organized as follows. The theory of steerable CNNs is introduced in Section 2. Related work is discussed in Section 3, which is followed by classification experiments (Section 4) and a discussion and conclusion in Section 5."}, {"section_index": "3", "section_name": "2.1 FEATURE MAPS AND FIBERS", "section_text": "Consider a 2D signal f : Z2 -> RK with K channels. The signal may be an input to the network or a feature representation computed by a CNN. Since signals can be added and multiplied by scalars. the set of signals of this signature forms a linear space F. Each layer of the network has its own feature space Fi, but we will often suppress the layer index to reduce clutter.\nIt is customary in deep learning to describe f E F as a stack of feature maps fk (for k = 1, . .., K) In this paper we also consider another decomposition of F into fibers. The fiber Fx at position x in. the \"base space' Z2 is the K-dimensional vector space spanned by all channels at position x. Thus,. f E F is comprised of feature vectors f(x) that live in the fibers F, (see Figure 1(a))..\n7or\nFigure 1: Feature maps, fibers, and the transformation law o of Fo"}, {"section_index": "4", "section_name": "2.2 STEERABLE REPRESENTATIONS", "section_text": "Let (F, ) be a feature space with a group representation and : F -> F' a convolutional network The feature space F' is said to be (linearly) steerable with respect to G, if for all transformations g E G, the features f and (g).f are related by a linear transformation '(g) that does not depend on f. So '(g) allows us to \"steer\" the features in F' without referring to the input in F from which they were computed.\nCombining the definition of steerability (i.e. (g) = '(g)) with the fact that is a grou representation, we find that ' must also be a group representation:.\n'(gh)f =(gh)f =(g)(h)f = '(g)(h)f ='(g)'(h)f\n(b) An image f E Fo is rotated by r using no(r)\nno(g)fl(x) = f(q-'x\nAn important property of o is that o(gh) = o(g)o(h). Here, gh means composition of transfor- mations in G, while o(g)o(h) denotes matrix multiplication. A vector space such as Fo equipped with a set of linear operators no satisfying this condition is known as a group representation (or just. representation, for short). A lot is known about group representations (Serre, 1977), and we will make extensive use of the theory, explaining the relevant concepts as needed..\nTo(mr no(m 0 T(mr)\nFigure 2: Diagram showing the structural consistency that follows from equivariance of the network and the group representation structure of o. The result of following any path in this diagram depends only on the beginning and endpoint but is independent of the path itself, c.f. eq. 
2\nThat is, '(qh) = '(q)'(h) (at least in the span of the image of ). Figure 2 gives an illustration\nFor simplicity, we will restrict our attention to discrete groups of transformations. The theory for continuous groups is almost completely analogous. Our running example will be the group p4m which consists of translations, rotations by 90 degrees around any point, and reflections. We further restrict our attention to groups that are constructed' from the group of translations Z2 and a group H of transformations that fixes the origin 0 E Z2. For p4m, we have H = D4, the 8-element group of reflections and rotations about the origin."}, {"section_index": "5", "section_name": "2.3 EOUIVARIANT FILTER BANKS", "section_text": "A filter bank can be described as an array of dimension (K',K, s,s), where K, K' denote the. number of input / output channels and s is the kernel size. For our purposes it is useful to think. of a filter bank as a linear map : F > RK' that takes as input a signal f E F and produces. a K'-dimensional feature vector. The filter bank only looks at an s s patch in F, so the matrix representing I has shape K' K . s2. To correlate a signal f using I, one would simply apply I. to translated copies of f, producing the output signal one fiber at a time..\nWe assume (by induction) that we have a representation that allows us to steer F. In order to make the output of the convolution steerable, we need the filter bank I : F -> RK' to be H-equivariant:.\nWe assume (by induction) that we have a representation that allows us to steer F. In order to make\np(h) = (h). Vh E I\nfor some representation p of H that acts on the output fibers (see Figure 3). Note that we only require equivariance with respect to H (which excludes translations) and not G, because translations can move patterns into and out of the receptive field of a fiber, making full translation equivariance impossi- ble.\nThe space of maps satisfying the equivariance constraint is de-. noted Hom(, p), because an equivariant map I is a \"ho-. momorphism of group representations\", meaning it respects. the structure of the representations. Equivariant maps are also sometimes called intertwiners (Serre, 1977).\nSince the equivariance constraint (eq. 3) is linear in I, the. P1represents th tion r by a permi space Hom (, p) of admissible filter banks is a vector space:. any linear combination of maps I, I' E Hom(, p) is again. cyclicly shifts th an intertwiner. Hence, given and p, we can compute a basis for Hom(, p) l System.\nas a semi-direct product\nTo(mr) T1(mr)\nUsing this division, we can first construct a filter bank that generates H-steerable fibers, and then. show that convolution with such a filter bank produces a feature space that is steerable with respeci to the whole group G.\n(3) To(r) P1r\nsee Figure 3). Note that we only require equivariance with. espect to H (which excludes translations) and not G, because To(r anslations can move patterns into and out of the receptive eld of a fiber, making full translation equivariance impossi-. le. he space of maps satisfying the equivariance constraint is de-. oted Homy(, p), because an equivariant map is a \"ho-. P1(r)\nFigure 3: A filter bank that is H-equivariant. .In this example. P1 represents the 90-degree rota- tion r by a permutation matrix that cyclicly shifts the 4 channels.\nComputation of the intertwiner basis is done offline, before training. 
Once we have such a basi V1,..., Yn for Hom(, p), we can express any equivariant filter bank as a linear combinatioi = , Q;, using parameters a;. As shown in Section 2.8, this can be done efficiently even ir. high dimensions."}, {"section_index": "6", "section_name": "2.4 INDUCTION", "section_text": "As stated before. the correlation + f could be computed by translating f before applying\n*f](x)=z(x)-f]\nWhere x E Z2 is interpreted as a translation when given as input to .\n+[(tr)fl](x) = p(r) |[* f]((tr)--x)\n'(tr)f](x)=p(r)f((tr)-'x)\nWhen parsing eq. 6, it is important to keep in mind that (as. indicated by the square brackets) r' acts on the whole feature space F' while p acts on individual fibers..\nIf we compare the induced representation (eq. 6) to the repre sentation o defined in eq. 1, we see that the difference lies only in the presence of a factor p(r) applied to the fibers This factor describes how the feature channels are mixed by the transformation. The color channels in the input space do not get mixed by geometrical transformations, so we say that no is induced from the trivial representation oo(h) = I.\nNow that we have a G-steerable feature space F', we can. iterate the procedure by computing a basis for the space of intertwiners between r' (restricted to H) and some p' of our choosing."}, {"section_index": "7", "section_name": "2.5 FEATURE TYPES AND CHARACTER THEORY", "section_text": "By now, the reader may be wondering how to choose p, or indeed what the space of representations that we can choose from looks like in the first place. We will answer these questions in this section by showing that each representation has a type (encoded as a short list of integers) that corresponds to a certain symmetry or invariance of the feature. We further show how the number of parameters of an equivariant filter bank depends on the types of the representations and p that it intertwines. Our discussion will make use of a number of important elementary results from group representation theory which are stated but not proved. The reader wishing to go deeper may consult chapters 1 and 2 of the excellent book by Serre (1977).\nRecall that a group representation is a set of invertible linear maps p(g) : RK -> RK satisfying p(gh) = p(g)p(h) for all elements g, h E H. It can be shown that any representation is a direct sun (i.e. block_diag plus change of basis) of a number of \"elementary' representations associatec with G. These building blocks are called irreducible representations (or irreps), because they car\nWe have shown how to parameterize filter banks that intertwine r and p, making the output fibers H steerable by p if the input space F is H-steerable by . In this section we show how H-steerability of fibers F! leads to G-steerability of the whole feature space F'. This happens through a natural and important construction known as the induced representation (Mackey, 1952; 1953; 1968; Serre. 1977: Taylor, 1986; Folland, 1995; Kaniuth & Taylor, 2013).\nWe can now calculate the transformation law of the output space. To do so, we apply a translation t and transformation r E H to f E F, yielding (tr)f, and then perform the correlation with I. With a some algebra (Appendix A), we find:\nTo(r 71(r)\nFigure 4: The representation 1 in- duced from the permutation repre- sentation p1 shown in fig. 3. A single fiber is highlighted. It is transported to a new location, and acted on by p1.\nTable 1: The irreducible representations of the roto-reflection group D4. This group is generated by. 
90-degree rotations r and mirror reflections m, and has 5 irreps labelled A1, A2, B1, B2, E. Left: decomposition of o (eq. 1) in the space Fo of 33 filters with one channel. This representation turns out to have type (3,0,1,1,2), meaning there are three copies of A1, one copy of B1, one copy of B2, and two copies of the 2D irrep E (A2 does not appear). Right: the representation. matrices of each irrep, for each element of the group D4. The reader may verify that these are valid. representations, and that the characters (traces) are orthogonal..\nthemselves not be block-diagonalized. In other words, if g; are the irreducible representations of H, any representation p of H can be written in block-diagonal form:.\nfor some basis matrix A, and some ik that index the irreps (each irrep may occur 0 or more times)\nEach irreducible representation corresponds to a type of symmetry, as shown in table 1. For example. as can be seen in this table, the representations B1 and B2 represent the 90-degree rotation r as the matrix [-1], so the basis filters for these representations change sign when rotated by r. It should be. noted that in the higher layers l > 0, elementary basis filters can look different because they depend. on the representation i that is being decomposed.\nThe fact that all representations can be decomposed into a direct sum of irreducibles implies that each representation has a basis-independent type: which irreducible representations appear in it, and with what multiplicity. For example, the input representation o (table 1) has type (3, 0,1,1,2). This means that, for instance, no(r) is block-diagonalized as:\nA-1or)A = block_diag([1], [1],[1], [-1],[-1], [0 -1; 1 0] , [0 1; -1 0]) .\nSo the most general way in which we can choose a representation p is to choose multiplicitie. m; > 0 and a basis matrix A. In Section 2.7 we will find that there is an important restriction on thi freedom, which alleviates the need to choose a basis. The choice of multiplicities is then the only hyperparameter, analogous to the choice of the number of channels in an ordinary CNN. Indeed, the multiplicities determine the number of channels: K = , m; dim i."}, {"section_index": "8", "section_name": "2.6 DETERMINING THE TYPE OF THE INDUCED REPRESENTATION", "section_text": "1 Xyi(h)Xy;(h) = 0ij H hEH\nIrrep Basis in Fo r2 r3 r m mr mr2 mr3 e A1 [1] [1] [1] [1] [1] [1] [1] [1] A2 [1] [1] [1] [1] [-1] [-1] [-1] [-1] B1 [1] [-1] [1] [-1] [1] [-1] [1] [-1] B2 [1] [-1] [1] [-1] [-1] [1] [-1] [1] E [1 0] [o -1] -1 0 0 1 [-1 0 [o 1 0 -1 [6 i] [1 0 1 -1 0 0 11 0 0 -1 1 -1 0\nIrrep Basis in Fo e r r2 r3 m mr mr mr A1 [1] [1] [1] [1] [1] [1] [1] [1] A2 [1] [1] [1] [1] [-1] [-1] [-1] [-1] B1 [1] [-1] [1] [-1] [1] [-1] [1] [-1] B2 [1] [-1] [1] [-1] [-1] [1] [-1] [1] [6 i] -1 0 0 1 -1 0 [0 1] 1 E 0 [0 -1 1 0 0 -1 0 0 -1 1 1 0 0 -1 -1 0\nD p(g) = A A\nWhere the block matrix contains (3, 0, 1, 1, 2) copies of the irreps (A1, A2, B1, B2, E), evaluated at r (see column r in table 1). The change of basis matrix A is constructed from the basis filters. shown in table 1 (and the same A block-diagonalizes o(g) for all g).\nBy choosing the type of p, we also determine the type of = Ind p (restricted to H), but what is it?. Explicit formulas exist (Reeder (2014); Serre (1977)) but are rather complicated, so we will present a simple computational procedure that can be used to determine the type of any representation. This. procedure relies on the character Xp(g) = Tr(p(g)) of the representation to be decomposed. 
The most important fact about characters is that the characters of irreps , $; are orthogonal:.\nXp,Xyi) =(XD;mj4jX = mi\nSo a simple dot product of characters is all we need to determine the type of a representation. As we will see next, the type of the input and output representation of a layer determines the parameter cost of that layer.\nIn section 2.3, we found that a filter bank is equivariant if and only if it lies in the vector space called Hom(, p). It follows that the number of parameters for such a filter bank is equal to the dimensionality of this space, n = dim Hom(, p). This number is known as the intertwining number of and p and plays an important role in the theory of group representations.\nBy linearity and the orthogonality of characters, we find that dim Hom(, p) = , m;m', for. representations ,p of type (m1,...,mj) and (ms,...,m'j), respectively. Thus, as far as the. number of parameters of a steerable convolution layer is concerned, the only choice we have to make for p is its type - a short list of integers mi..\ndim : dim p u = dim Hom(, p)\nThe numerator equals s2K . K': the number of parameters for a non-equivariant filter bank. The denominator equals the parameter cost of an equivariant filter bank with the same filter size and number of input/output channels. Typical values of in effective architectures are around H|, e.g = 8 for H = D4. Such a layer utilizes its parameters 8 times more intensively than an ordinary convolution layer.\nIn the previous section we showed that only the basis-independent types of and p play a role in determining the parameter cost of an equivariant filter bank. An equivalent representation p'(g) =-- Ap(g)A-1 will have the same type, and hence the same parameter cost as p. However, when it. comes to nonlinearities, different bases behave differently..\nSince commutation with nonlinearities depends on the basis, we need a more granular notion than the. feature type. We define a p-capsule as a (typically low-dimensional) feature vector that transform according to a representation p (we may also refer to p as the capsule). Thus, while a capsule has. a type, not all representations of that type are equivalent as capsules. Given a catalogue of capsule. p' (for i = 1, ..., C) with multiplicities m, we can construct a fiber as a stack of capsules that is. steerable by a block-diagonal representation p with m; copies of p' on the diagonal..\nLike the capsules of Hinton et al. (2011), our capsules encode the pose of a pattern in the input, and consist of a number of units (dimensions) that do not get mixed with the units of other capsules by symmetries. In this sense, a stack of capsules is disentangled (Cohen & Welling, 2014).\nFurthermore, since the trace of a direct sum equals the sum of the traces (i.e. Xpp' = Xp + Xp') and every representation p is a direct sum of irreps, it follows that we can obtain the multiplicity of Irrep o; in p by computing the inner product with the i-th character:\nSteerable CNNs use parameters much more efficiently than ordinary CNNs. In this section we show how the number of parameters required by an equivariant layer is determined by the feature types of the input and output space, and how the efficiency of a choice of feature types may be evaluated.\nAs with multiplicities, the intertwining number is easily computed using characters. 
It can be shown (Reeder, 2014) that the intertwining number equals:\nThe efficiency of a choice of type can be assessed using uantity we call the parameter utilization:\nJust like a convolution layer (eq. 3), a layer of nonlinearities must commute with the group action An elementwise nonlinearity v : R -> R (or more generally, a fiber-wise nonlinearity v : RK ->- RK') is admissible for an input representation p if there exists an output representation p' such that v applied after p equals p' applied after v..\nWe have found a few simple types of capsules and corresponding admissible nonlinearities. It is easy to see that any nonlinearity is admissible for p when the latter is realized by permutation matrices: permuting a list of coordinates and then applying a nonlinearity is the same as apply- ing the nonlinearity and then permuting. If p is realized by a signed permutation matrix, then CReLU(a) = (ReLU(a), ReLU(-a)) introduced by Shang et al. (2016), or any concatenated non- linearity v'(a) = (v(a),v(-a)), will be admissible. Any scale-free concatenated nonlinearity such as CReLU is admissible for a representation realized by monomial matrices (having the same nonzero pattern as a permutation matrix). Finally, we can always make a representation of a finite group orthogonal by a suitable choice of basis, which means that we can use any nonlinearity that acts only on the length of the vector.\nFor many groups, the irreps can be realized using signed permutation matrices, so we can use ir. reducible -capsules with concatenated nonlinearities such as CReLU. Another class of capsules. which we call quotient capsules, are naturally realized by permutation matrices, and are thus com patible with any nonlinearity. These are described in Appendix C.."}, {"section_index": "9", "section_name": "2.8 COMPUTATIONAL EFFICIENCY", "section_text": "Modern convolutional networks often use on the order of hundreds of channels K per layer Zagoruyko & Komodakis (2016). When using 3 3 filters, a filter bank can have on the order of 9K2 ~ 106 dimensions. The number of parameters for an equivariant filter bank is about ~ 10. times smaller, but a basis for the space of equivariant filter banks would still be about 106 105. which is too large to be practical.\nIn practice, we typically use many copies of the same capsule (say n; copies of p' and m; copies of ). Therefore, many of the blocks hi; can be constructed using the same intertwiner basis. If we order equivalent capsules to be adjacent, the intertwiner consists of \"blocks of blocks'. Each superblock H; has shape n dim p' m; dim , and consists of subblocks of shape dim p' dim t.\nThe computation graph for an equivariant convolution layer is constructed as follows. Given a catalogue of capsules p' and corresponding post-activation capsules Act p', we compute the in duced representations ' = Ind Act, p' and the bases for Hom(p', 3) in an offline step. The bases are stored as matrices y of shape dimp' . dim ? dim Homh(p', ). Then, giver a list of input / output multiplicities ni, m; for the capsules, a parameter matrix O of shape dim Hom(p', ) n;m, is instantiated. The superblocks H, are obtained by a matrix multi plication yij Oij plus reshaping to shape dim p' . dim ? n;m;. 
Once all superblocks are filled in the matrix is reshaped from K' Ks2 to K' K s s and convolved with the input."}, {"section_index": "10", "section_name": "2.9 USING STEERABLE CNNS IN PRACTICE", "section_text": "A full understanding of the theory of steerable CNNs requires some knowledge of group represen. tation theory, but using steerable CNN technology is not much harder than using ordinary CNNs.. Instead of choosing a number of channels for a given layer, one chooses a list of multiplicities m.. for each capsule in a library of capsules provided by the developer. To preserve equivariance, the ac-. tivation function applied to a capsule must be chosen from a list of admissible nonlinearities for that. capsule (which sometimes includes all nonlinearities). Finally, one must respect the type system and. only add identical capsules (e.g. in ResNets). These constraints can all be checked automatically.\nFortunately, the block-diagonal structure of and p induces a block structure in I. Suppose = block_diag(1, ) and p = block_diag(p1,... pQ). Then an intertwiner is a matrix. of shape K' Ks2, where K' = . dim p' and K s2 = , dim '. This matrix has the following. block structure:\nhu E Homh(o h1p E Homh = hR1 E Homh. hRp E HomH\nEach block h; corresponds to an input-output pair of capsules, and can be parameterized by a linear combination of basis matrices E Hom H (p', ). P k"}, {"section_index": "11", "section_name": "3 RELATED WORK", "section_text": "Steerable filters were first studied for applications in signal processing and low-level vision (Freemar. & Adelson, 1991; Greenspan et al., 1994; Simoncelli & Freeman, 1995). More or less explicit con nections between steerability and group representation theory have been observed by Lenz (1989) Koenderink & Van Doorn (1990); Teo (1998); Krajsek & Mester (2007). As we have tried to demon strate in this paper, representation theory is indeed the natural mathematical framework in which to. study steerability.\nIn machine learning, equivariant kernels were studied by Reisert (2008); Skibbe (2013). In the con. ext of neural networks, various authors have studied equivariant representations. Capsules wer introduced in Hinton et al. (2011), and significantly improved by Tieleman (2014). A theoretica account of equivariant representation learning in the brain is given by Anselmi et al. (2014). Grou equivariant scattering networks were defined and studied by Mallat (2012) for compact groups. and by Sifre & Mallat (2013); Oyallon & Mallat (2015) for the roto-translation group. Jacobse. et al. (2016) describe a network that uses a fixed set of (possibly steerable) basis filters with learne weights. Lenc & Vedaldi (2015) showed empirically that convolutional networks tend to learn equiv ariant representations, which suggests that equivariance could be a good inductive bias..\nInvariant and equivariant CNNs have been studied by Gens & Domingos (2014); Kanazawa et al. (2014); Dieleman et al. (2015; 2016); Cohen & Welling (2016); Marcos et al. (2016). All of these models, as well as scattering networks, implicitly use the regular representation: feature maps are (often implicitly) conceived of as functions on G, and the action of G on the space of functions on G is known as the regular representation (Serre (1977), Appendix B). Our work is the first to consider other kinds of equivariance in the context of CNNs..\nThe idea of adding a type system to neural networks has been explored by Olah (2015); Balduzzi & Ghifary (2016). 
We have shown that a type system emerges naturally from the decomposition of a linear representation of a mathematical structure (a group, in our case) associated with the representation learned by a neural network."}, {"section_index": "12", "section_name": "4 EXPERIMENTS", "section_text": "We implemented steerable CNNs in Chainer (Tokui et al., 2015) and performed experiments on the CIFAR10 dataset (Krizhevsky, 20o9) to determine if steerability is a useful inductive bias, and t determine the relative merits of the various types of capsules. In order to run experiments faster, anc to see how steerable CNNs perform in the small-data regime, we used only 2000 training samples for our initial experiments.\nAs a baseline, we used the competitive wide residual networks (ResNets) architecture (He et al.. 2016a;b; Zagoruyko & Komodakis, 2016). We tuned the capacity of this network for the reducec. dataset size and settled on a 20 layer architecture (three residual blocks per stage, with two layers. each, for three stages with feature maps of size 32 32, 16 16 and 8 8, various widths). We compared the baseline architecture to various kinds of steerable CNN, obtained by replacing the convolution layers by steerable convolution layers. To make sure that differences in performance were not simply due to underfitting or overfitting, we tuned the width (number of channels, K. using a validation set. The rest of the training procedure is identical to Cohen & Welling (2016), anc. is fixed for all of our experiments..\nWe first tested steerable CNNs that consist entirely of a single kind of capsule. We found that. architectures with only one type do not perform very well (roughly 30-40% error, vs. 30% for plair ResNets trained on 2k samples from CIFAR1O), except for those that use the regular representation capsule (Appendix C), which outperforms standard CNNs (26.75% error). This is not too surprising,. because many capsules are quite restrictive in the spatial patterns they can express. The strong. performance of regular capsules is consistent with the results of Cohen & Welling (2016), and can. be explained by the fact that the regular representation contains all other (irreducible and quotient representations as subrepresentations, and can therefore learn arbitrary spatial patterns..\nWe then created networks that use a mix of the more successful kinds of capsules. After a few preliminary experiments, we settled on a residual network that uses one mix of capsules for the input and output layer of a residual block, and another for the intermediate layer. The first representation\nNet Depth Width #Params #Labels Dataset Test error Ladder 10 96 4k C10ss 20.4 steer 14 (280, 112) 4.4M 4k C10 23.66 steer 20 (160, 64) 2.2M 4k C10 24.56 14 (280, 112) 4.4M 4k steer C10+ 16.44 steer 20 (160, 64) 2.2M 4k C10+ 16.42 ResNet 1001 16 10.2M 50k C10+ 4.62 Wide 28 160 36.5M 50k C10+ 4.17 Dense 100 2400 27.2M 50k C10+ 3.74 steer 26 (280, 112) 9.1M 50k C10+ 3.74 steer 20 (440, 176) 16.7M 50k C10+ 3.95 steer 14 (400, 160) 9.1M 50k C10+ 3.65 ResNet 1001 16 10.2M 50k C100+ 22.71 Wide 28 160 36.5M 50k C100+ 20.50 Dense 100 2400 27.2M 50k C100+ 19.25 steer 20 (280, 112) 6.9M 50k C100+ 19.84 steer 14 (400, 160) 9.1M 50k C100+ 18.82\nTable 2: Comparison of results of steerable CNNs vs. previous state of the art methods. A plus (+) indicates modest data augmentation (shifts and flips). 
Width for steerable CNNs is reported as a pair of numbers, one for the input / output layer of a ResNet block, and one for the intermediate layer\nWhen tested on CIFAR10 with 4k labels (table 2), the method comes close to the state of the art in semi-supervised methods, that use additional unlabelled data (Rasmus et al., 2015), and better than transfer learning approaches such as DCGAN which achieves 26.2% error (Radford et al.,. 2015). When tested on the full CIFAR10 and CIFAR100 dataset, the steerable CNN substantially outperforms the ResNet (He et al., 2016b) baseline and achieves state of the art results (improving. over wide and dense nets (Zagoruyko & Komodakis, 2016; Huang et al., 2016)).\nWe have presented a theoretical framework for understanding steerable representations in convolu tional networks. and have shown that steerability is a useful inductive bias that can improve mode accuracy, particularly when little data is available. Our experiments show that a simple steerable architecture achieves state of the art results on CIFAR10 and CIFAR100, outperforming recent ar chitectures such as wide and dense residual networks\nThe mathematical connection between representation learning and representation theory that we have established improves our understanding of the inner workings of (equivariant) convolutional networks, revealing the humble CNN as an elegant geometrical computation engine. We expect thai this new tool (representation theory), developed over more than a century by mathematicians and physicists, will greatly benefit future investigations in this area.\nAnother direction for future work is learning the feature types, which may be easier in the continuous setting because (for non-compact groups) the irreps live in a continuous space where optimizatior may be possible. Beyond classification, steerable CNNs are likely to be useful in geometrical tasks such as action recognition, pose and motion estimation, and continuous control tasks.\nconsists of quotient capsules: regular, qm, qmr2, qmr3 (see Appendix C) followed by ReLUs. The second consists of irreducible capsules: A1, A2, B1, B2, E(2x) followed by CReLUs. On CIFAR10 with 2k labels, this architecture works better than standard ResNets and regular capsules at 24.48% error.\nFor concreteness, we have used the group of flips and rotations by multiples of 90 degrees as a running example throughout this paper. This group already has some nontrivial characteristics (such as non-commutativity), but it is still small and discrete. The theory of steerable CNNs, however. readily extends to the continuous setting. Evaluating steerable CNNs for large, continuous and high-dimensional groups is an important piece of future work.\nWe kindly thank Kenta Oono, Shuang Wu, Thomas Kipf and the anonymous reviewers for thei feedback and suggestions. This research was supported by Facebook, Google and NwO (grant. number NAI.14.108).\nF. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T. Poggio. Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning? Technical Report 001, MIT Center for Brains, Minds and Machines, 2014.. D. Balduzzi and M. Ghifary. Strongly-Typed Recurrent Neural Networks. Proceedings of the 33rd International. Conference on Machine Learning. 33. 2016.\nlearning? Technical Report 001, MIT Center for Brains, Minds and Machines, 2014. D. Balduzzi and M. Ghifary. Strongly-Typed Recurrent Neural Networks. 
Proceedings of the 33rd Internation Conference on Machine Learning, 33, 2016. T. Cohen. Learning Transformation Groups and their Invariants, 2013. T. Cohen and M. Welling. Learning the Irreducible Representations of Commutative Lie Groups. In Pr ceedings of the 31st International Conference on Machine Learning (ICML), volume 31, pp. 1755-176 2014. T. S. Cohen and M. Welling. Group equivariant convolutional networks. In Proceedings of The 33rd Intern tional Conference on Machine Learning (ICML), volume 48, pp. 2990-2999, 2016. S. Dieleman, K. W. Willett, and J. Dambre. Rotation-invariant convolutional neural networks for galaxy mc phology prediction. Monthly Notices of the Royal Astronomical Society, 450(2), 2015. S. Dieleman, J. De Fauw, and K. Kavukcuoglu. Exploiting Cyclic Symmetry in Convolutional Neural Ne works. In International Conference on Machine Learning (ICML), 2016. G. B. Folland. A Course in Abstract Harmonic Analysis. CRC Press, 1995. W. T. Freeman and E. H. Adelson. The design and use of steerable filters. Pattern Analysis and Machi Intelligence, IEEE Transactions on, 13(9):891-906, sep 1991. R. Gens and P. Domingos. Deep Symmetry Networks. In Advances in Neural Information Processing Systen (NIPS), 2014. H. Greenspan, S. Belongie, R. Goodman, and P. Perona. Overcomplete Steerable Pyramid Filters and Rotatic Invariance. Proceedings of the Computer Vision and Pattern Recognition (CVPR), 1994 K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In IEEE Conference c Computer Vision and Pattern Recognition (CVPR), 2016a. K. He, X. Zhang, S. Ren, and J. Sun. Identity Mappings in Deep Residual Networks. In European Conferenc on Computer Vision (ECCV), 2016b. G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. ICANN-11: International Confe ence on Artificial Neural Networks, Helsinki, 2011. G. Huang, Z. Liu, and K. Q. Weinberger. Densely Connected Convolutional Networks. 2016. URL http //arxiv.0rg/abs/1608.06993. J.-H. Jacobsen, J. van Gemert, Z. Lou, and A. W. Smeulders. Structured Receptive Fields in CNNs. In Comput Vision and Pattern Recognition (CVPR), 2016. K. Kanatani. Group-Theoretical Methods in Image Understanding. Springer-Verlag New York, Inc., Secaucu NJ, USA, 1990. ISBN 9783642852152. A. Kanazawa, A. Sharma, and D. Jacobs. Locally Scale-invariant Convolutional Neural Network. Deep Lear ing and Representation Learning Workshop: NIPS, pp. 1-11, 2014. E. Kaniuth and K. F. Taylor. Induced Representations of Locally Compact Groups. Cambridge Universi Press, 2013. ISBN 9780521762267. J. J. Koenderink and a. J. Van Doorn. Receptive field families. Biological Cybernetics, 63(4):291-297, 199 ISSN 03401200. doi: 10.1007/BF00203452.\nT. Cohen. Learning Transformation Groups and their Invariants, 2013\nT. Cohen and M. Welling. Learning the Irreducible Representations of Commutative Lie Groups. In Pro-. ceedings of the 31st International Conference on Machine Learning (ICML), volume 31, pp. 1755-1763, 2014. T. S. Cohen and M. Welling. Group equivariant convolutional networks. In Proceedings of The 33rd Interna-. tional Conference on Machine Learning (ICML), volume 48, pp. 2990-2999, 2016. S. Dieleman, K. W. Willett, and J. Dambre. Rotation-invariant convolutional neural networks for galaxy mor- phology prediction. Monthly Notices of the Royal Astronomical Society, 450(2), 2015.. S. Dieleman, J. De Fauw, and K. Kavukcuoglu. Exploiting Cyclic Symmetry in Convolutional Neural Net- works. 
In International Conference on Machine Learning (ICML), 2016.. G. B. Folland. A Course in Abstract Harmonic Analysis. CRC Press, 1995.. W. T. Freeman and E. H. Adelson. The design and use of steerable filters. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 13(9):891-906, sep 1991.\nJ.-P. Serre. Linear Representations of Finite Groups. Springer, 1977\nM. E. Taylor. Noncommutative Harmonic Analysis. American Mathematical Society, 1986. ISBN 0821815237.\nP. C.-S. Teo. Theory and Applications of Steerable Functions. PhD thesis, Stanford University, 1998\nT. Tieleman. Optimizing Neural Networks that Generate Images. PhD thesis, 2014\nA. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, University of. Toronto, 2009. K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence.. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2O15.. R. Lenz. Group-theoretical model of feature extraction. Journal of the Optical Society of America A (Optics. and Image Science), 6(6):827-834, 1989. G. W. Mackey. Induced Representations of Locally Compact Groups I. Annals of Mathematics, 55(1):101-139, 1952. G. W. Mackey. Induced Representations of Locally Compact Groups II. The Frobenius Reciprocity Theorem. Annals of Mathematics, 58(2):193-221, 1953. G. W. Mackey. Induced Representations of Groups and Quantum Mechanics. W.A. Benjamin Inc., New York-. Amsterdam, 1968. S. Mallat. Group Invariant Scattering. Communications in Pure and Applied Mathematics, 65(10):1331-1398, 2012. D. Marcos, M. Volpi, and D. Tuia. Learning rotation invariant convolutional filters for texture classification pp. 6,2016. URL http://arxiv.0rg/abs/1604.06720. C. Olah. Neural Networks, Types, and Functional Programming, 2015. URL https: / /colah. github.. io/posts/2015-09-NN-Types-FP/. E. Oyallon and S. Mallat. Deep Roto-Translation Scattering for Object Classification. In IEEE Conference on. Computer Vision and Pattern Recognition (CVPR), pp. 2865--2873, 2015. A. Radford, L. Metz, and S. Chintala. Unsupervised Representation Learning with Deep Convolutional Gener- ative Adversarial Networks. arXiv, pp. 1-15, 2015. ISSN 0004-6361. doi: 10.1051/0004-6361/201527329. URLhttp://arxiv.0rg/abs/1511.06434. A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and T. Raiko. Semi-supervised learning with Ladder Net- works. In Neural Information Processing Systems (NIPS), 2015.. M.Reeder. Notes on representations of finite groups, 2014.. URL https://www2.bc.edu/ {~} reederma/RepThy.pdf.. M. Reisert. Group Integration Techniques in Pattern Analysis: A Kernel View. PhD thesis, Albert-Ludwigs-. University, 2008. DSe 1077\nW. Shang, K. Sohn, D. Almeida, and H. Lee. Understanding and Improving Convolutional Neural Networks via. Concatenated Rectified Linear Units. In International Conference on Machine Learning (ICML), volume 48,. 2016. . Sifre and S. Mallat. Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2013.. E. Simoncelli and W. Freeman. The steerable pyramid: a flexible architecture for multi-scale derivative com-. putation. Proceedings of the International Conference on Image Processing, 3:444 447, 1995.ISSN. 0818673109. doi: 10.1109/ICIP.1995.537667. H. Skibbe. Spherical Tensor Algebra for Biomedical Image Analysis. PhD thesis, Albert-Ludwigs-Universitat.\nH. Weyl. 
The classical groups: their invariants and representations. Princeton University Press, 1939"}, {"section_index": "13", "section_name": "APPENDIX A: INDUCTION", "section_text": "In this section we will show that a stack of feature maps produced by convolution with an H. equivariant filter bank transforms according to the induced representation. That is, we will derive eq. 5, repeated here for convenience:.\n[ +[(tr)f]](x) = Pl+1(r) [[+ f]((tr)-1x)\nIn the main text, we mentioned that x E Z2 can be interpreted as a point or as a translation. Here we make this difference explicit, by writing x E Z2 for a point and x E G for a translation. (The operation - defines a section of the projection map G -> Z2 that forgets the non-translational part of the transformation (Kaniuth & Taylor, 2013)).\n* f](x)=(x-)J\n[1 x x = 0 1\nTo keep notation uncluttered, we will write = and p = Pi+1. In full detail, the derivation of the transformation law for the feature space induced by p proceeds as follows:.\nThe last line is the result shown in the pa aper. The justification of each step is.\nI. Definition of * 2. is a homomorphism / group representation\nAlthough the induced representation can be described in a more general setting, we will use an explicit matrix representation of G to make it easier to check our computations. A general element of G is written as:\n1 T R 0 R T 0 1 0 1 0 1\nFinally, we will distinguish the action of G on itself, written gh for g,h E G (implemented as matrix-matrix multiplication) and its action on Z2, written g : x for g E G and x E Z2 (implemented as matrix-vector multiplication by adding a homogeneous coordinate to x)..\n[+ [(tr)f]](x) = (x-)(tr)f =V(x-1tr)f -(rr-'x-'tr)f (r)(r-'x-'tr)f =p(r)r(r-1x-1tr).f p(r)((r-1t-1xr)-1 =p(r)VT -p(r)[+ f]((tr)-1.\n3. rr-1 is the identity, so can always multiply by it 4. is a homomorphism / group representation 5. E Hom(, p) is equivariant to r E H. 6.Invert twice. 7. (tr)-1 . x = r-1t-1xr can be checked by multiplying the matrice 8. Definition of\nThe derivation above is somewhat involved and messy, so the reader may prefer to think geometri cally (using the figures in the paper) instead of algebraically. This complexity is an artifact of the lack of abstraction in our presentation. The induced representation is really a very natural object to consider (abstractly, it is the \"adjoint functor' to the restriction functor. A more abstract treatment of the induced representation can be found in Serre (1977); Mackey (1952); Reeder (2014). A treat- ment that is close to our own, but more general is the \"alternate description' found on page 49 of Kaniuth & Taylor (2013)."}, {"section_index": "14", "section_name": "APPENDIX B: RELATION TO GROUP EOUIVARIANT CNNS", "section_text": "This defines a linear representation of G known as the regular representation. It is easy to see that the regular representation is naturally realized by permutation matrices. Furthermore, it is known that the regular representation of G is induced by the regular representation of H. The latter is defined in Appendix C, and is what we refer to as \"regular capsules\"' in the paper.."}, {"section_index": "15", "section_name": "APPENDIX C: REGULAR AND OUOTIENT FEATURES", "section_text": "p(a)f(bK) = f(a-1.bK)\nThe function f attaches a value to every coset. The H-action permutes these values, because it permutes the cosets. Hence, p can be realized by permutation matrices. 
For small groups the explicit computations can easily be done by hand, while for large groups this task can be automated\nIn this way, we get one permutation representation for each subgroup K of H. In particular, for the. subgroup K = {e} (the trivial subgroup containing only the identity e), we have H/K ~ H. The representation in the space of functions on H is known as the \"regular representation'\". Using such regular representations in a steerable CNN is equivalent to using the group convolutions introduced. in Cohen & Welling (2016), so steerable CNNs are a strict generalization of G-CNNs. At the other. extreme, we take K = H, which gives the quotient H/K {e}, the trivial group, which gives the. trivial representation A1.\nFor the roto-reflection group H = D4, we have the following subgroups and associated quotien features\nIn this section we show that the recently introduced Group Equivariant Convolutional Networks (G. CNNs, Cohen & Welling (2016)) are a special kind of steerable CNN. Specifically, a G-CNN is a steerable CNN with regular capsules.\nIn a G-CNN, the feature maps (except those of the input) are thought of as functions f : G -> RK instead of functions on the plane f : Z2 -> RK, as we do here. It is shown that the feature maps ransform according to\nn(g)f(h) = f(g--h)\nLet H be a finite group. A subgroup of H is a subset that is also itself a group (i.e. closed under composition and inverses). The (left) coset of a subgroup K in H are the sets hK = {hk[k E K} The cosets are disjoint and jointly cover the whole group H (i.e. they partition H). The set of all cosets of K in H is denoted H/K, and is also called the quotient of H by K.\nThis action translates into an action on the space of functions on H/K. Let Q denote the space of functions f : H/K -> R. Then we have the following representation of H:.\n{e, m} {e,mr} {e, mr2} {e,mr3} {e,r2} {e,r,r2, e, r2, m, mr , r2, mr, mr3 H"}] |
S1JG13oee | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Recently, f-GAN, which minimizes the variational estimate of f-divergence, has been propose (Nowozin et al.]2016). The original GAN is a special case of f-GAN.\nIn this study, we propose a novel algorithm inspired by GANs from the perspective of density ratio estimation based on the Bregman divergence, which we refer to as b-GAN. The proposed algorithm iterates density ratio estimation and f-divergence minimization based on the obtained density ratio This study make the following two primary contributions:\nTable 1: Relation among GAN. f-GAN and b-GAN\nName D-step (updating 0D) G-step (updating 0G) GAN Estimate Adversarial update p+q f-GAN Estimate f'(% Minimize a part of variational. when f = x log x - (x + 1) log(x + 1), it is a GAN Estimate of f-divergence b-GAN Estimate = r(x) mine Ex~q(x;0)[f(r(x))] (this work) Dual relation with f-GAN. Minimize f-divergence directly."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "There have been many recent studies about deep generative models. Generative adversarial networks. (GAN) (Goodfellow et al.J|2014) is the variant of these models that has attracted the most attention. It has been demonstrated that generating vivid, realistic images from a uniform distribution is possible (Radford et al.]2015) Denton et al.]2015). GANs are formulated as a two-player minimax game However, the objective function derived in the original motivation is modified to obtain stronger gradients when learning the generator. GANs have been applied in various studies; however, few. studies have attempted to reveal their mechanism (Goodfellow2014) Huszar2015).\n1. We derive a novel unified algorithm that employs well-studied results regarding density ratio estimation (Kanamori et al.2012] Sugiyama et al.2012]Menon & Ong2016). 2. In the original GANs, the value function derived from the two-player minimax game does. not match the objective function that is actually used for learning the generative model. In. our algorithm, the objective function derived from the original motivation is not changed for learning the generative model.\nMinimize a part of variational N Estimate of f-divergence. mine Ex~q(x;e)[f(r(x))] Minimize f-divergence directly\nThe remainder of this study is organized as follows. Section 2 describes related work. Section 3. introduces and analyzes the proposed algorithm in detail. Section 4 explains the proposed algorithm. for specific cases. Section 5 reports experimental results. Section 6 summarizes our findings and discusses future work.\nIn this study, we denote an input space as X and a hidden space as Z. Let p(x) be the distribution o training data over X and q(x) be the generated distribution over X..\nGANs (Goodfellow et al.2014) were developed based on a game theory scenario, where two mode i.e., a generator network and a discriminator network, are simultaneously trained. The generato network Geg (z) produces samples with a probability density function of q(x; 0G). The discriminato network Tep (x) attempts to distinguish the samples from the training samples and that from the generator. 
GANs are described as a zero-sum game, where the function v(G, T) determines the pay-off of the discriminator and the function -v(G, T) determines the pay-off of the generatoi The discriminator Tep(x) and generator Geg(z) play the following two-player minimax game mineg maxep v(G, T), where v(G, T) can be expressed as follows:\nEx~p(x)[logTep(x)] + Ex~q(x;0g)[log(1- Tep(x))]\nf-GAN (Nowozin et al.] 2016) generalizes the GAN concept. First, we introduce f-divergence (Ali 8 Silvey1966). The f-divergence measures the difference between two probability distributions p anc q and is defined as\np(x) Df(p||q) = x dx q(x)f (r(x)) dx 1r\nwhere f(x) is a convex function satisfying f(1) = 0. Note that in the space of positive measures i.e., not satisfying normalized conditions, f-divergence must satisfy f'(1) = 0 due to its invariance [Amari & Cichoki]2010)\nThe function v(G, T) of f-GAN is given by\nEx~p(x)[Top(x)]- Ex~q(x;0g)[f* (Tep(x))]\nwhere f* is a Fenchel conjugate of f (Nguyen et al. 2010). In Eq.2] v(G, T) comes from\np(x) q(x) f dx = sup Ex~p[T(x)]- Ex~q[f*(T(x)) T\nThe discriminator and generator are iteratively trained by turns. For fixed G, the optimal T(x). p(x) 1iS This suggests that training the discriminator can be formulated as a density ra. tio estimation. The generator is trained to minimize v(G, T) adversarially. In fact, maximizing Ex~q(x;0c) [log Tep (x)] is preferable instead of minimizing Ex~q(x;0G)[log(1 TD(x))]. Although this does not match the theoretical motivation, this heuristic is the key to successful learning. We. analyze this heuristic in Section 3.4..\nFollowing GANs, 0p is trained to maximize Eq.2|in order to estimate the f-divergence. In contrast. 0g is trained to adversarially minimize Eq. 2|to minimize the f-divergence estimate. However, as in GANs, maximizing Ex~q[T(x)] is used rather than minimizing Ex~q[-f*(T(x))]. The latter optimization is theoretically valid in their formulation; however, they used the former heuristically Similar to GANs, f-GAN also formulates the training discriminator as a density ratio estimation. For a fixed G, the optimal T(x) is f'(P), where f' denotes the first-order derivative of f. When f(x) is\nx log x - (x + 1) log(1 + x), f-GANs are equivalent to GANs. Table 1 summarizes GAN and f-GAN We denote the step for updating 0p as D-step and the step for updating 0g as G-step."}, {"section_index": "2", "section_name": "3 METHOD", "section_text": "As described in Section 2, training the discriminators in the D-step of GANs and f-GANs is regarded as density ratio estimation. In this section, we further extend this idea. We first review the density ratio estimation method based on the Bregman divergence. Then, we explain and analyze a novel proposed b-GAN algorithm. See appendix E for recent research related to density ratio estimation.\nThere have been many studies on direct density ratio estimation, where a density ratio model is fitte to a true density ratio model under the Bregman divergence (Sugiyama et al.] 2012). We briefly review this method.\nAssume there are two distributions p(x) and q(x). Our aim is to directly estimate the true density q(x) model. The integration of the Bregman divergence B f [r(x) |[re(x)] between the density ratio model. 
and the true density ratio with respect to measure q(x)dx is.\nBDf(r||re) B f[r(x)||re(x)|q(x)dx -frex))-f'rex))(r(x)-re(x)) q(x)dx\nWe define the terms related to re in BD t(r[[re) as\nBR )re(x)- f(re(x))) q(x)dx (x ) d )re(x)q(x)-p(x)) dx- Df(qreq)\nThus, estimating the density ratio problem turns out to be the minimization of Eq. 5|with respect to"}, {"section_index": "3", "section_name": "3.2 MOTIVATION", "section_text": "In this section, we introduce important propositions required to derive b-GAN. Proofs of propositions are given in Appendix C. The following proposition suggests that the supremum of the negative of Eq.5|is equal to the f-divergence between p(x) and q(x).\nProp 3.1. The following equation holds.\n= supEx~p[f' (re(x))]- Ex~q[(f'(re(x))re(x)- f(re(x)))] re\nIt has been shown that the supremum of negative of Eq.5|is equivalent to the supremum of Eq. 2 Interestingly, the negative of Eq. 5|has a dual relation with the objective function of f-GAN, i.e., Eq 2.\nProp 3.2. Introducing dual coordinates Tep = f'(re)(Amari & Cichoki)2010) yields the right sic of Eq.5|from Eq. 2.\nProp 3.2 shows that the D-step of f-GAN can be regarded as the density ratio estimation because Eq 5|expresses the density ratio estimation and Eq. 2 is a value function of f-GAN.\nratio r(x without estimating. Let re(x) be a density ratio a(x\nOur objective is to minimize the f-divergence between the distribution of the training data p(x) and the generated distribution q(x). We introduce two functions constructed using neural networks. rep(x) : X -> R parameterized by 0p, and Geg(z) : Z -> X parameterized by 0g. Measure. q(x; 0g)dx is a probability measure induced from the uniform distribution by Geg(z). In this case,. rop(x) is regarded as a density ratio estimation network and Geg(z) is regarded as a generator. network for minimizing the f-divergence between p(x) and q(x)..\nMotivated by Section 3.2, we construct a b-GAN using the following two steps\nThe b-GAN algorithm is summarized in Algorithm 1, where B is the batch size. In this study, single-step gradient method (Goodfellow et al.. 2014 Nowozin et al.2016) is adopted"}, {"section_index": "4", "section_name": "3.4 ANALYSIS", "section_text": "Following Goodfellow et al. [2014], we explain the validity of the G-step and D-step. We then explain the meaning of b-GAN. Finally, we analyze differences between b-GAN and f-GAN..\nIn the G-step, we update the generator as minimizing D f(p|q) by replacing p(x) with re(x)q(x). We assume that q(x; 0g) is equivalent to p(x) when 0g = 0*, q(x; 0G) is identifiable, and the optimal r(x) is obtained in the D-step. By our assumption, the acquired value in the G-step is 0 which minimizes the empirical approximation of Df(r(x)q(x; 0G)||q(x; 0G)) = Ex~q(x;og)[r(x)]\n1. Update 0p to estimate the density ratio between p(x) and q(x; 0g). To achieve this, we. minimize Eq.5|with respect to re(x). In this step, the density ratio model re(x) in Eq.5. can be considered as rep (x) in this step.. 2. Update 0g to minimize the f-divergence Df(p||q) between p(x) and q(x; 0G) using the. obtained density-ratio. We are able to suppose that q(x; Og)re(x) is close to p(x). Instead of D f(p||q), we update 0g to minimize the empirical approximation of Df(qre||q).\nB 9t+1 = 0 _V e f'(rep(G(zi)))rep(G(zi))- f(rep(G(zi))) -f'(rep B\nB 1 9t+1 =0tG-VeG f(r(Goc(zi))) i=1\nIn the D-step, the proposed algorithm estimates P toward any divergence; thus, it differs slightly from. 
the D-step of f-GAN because the estimated values, i.e., f'(), are dependent on the divergences. We. also introduce an f-GAN-like update as follows. As mentioned in Section 2, we have two options in the G step.\n1. D-step: minimize Ex~p(x)[-f(rep(x))] + Ex~q(x)[f'(rep(x))rep(x) - f(rep(x))] w.r.t AD. 2. G-step: minimize Ex~q(x;eg)[-f'(r(x))] or Ex~q(x;0g)[-f'(r(x))r(x) + f(r(x))] w.r.t AG.\nD-step: minimize Ex~p. (x))rep(x)- f(rep(x))] w (rep(xl\nThe density ratio is estimated in the D-step. The estimator of r(x) is an M-estimator and is asymptot ically consistent under the proper normal conditions (Appendix E.1).\nUsually, we cannot perform only a G-step because we do not know the form of p(x) and q(x). In b-GAN, D(p||q) can be minimized by estimating the density ratio r(x) without estimating the. densities directly.\nIn fact, the r(x) obtained at each iteration is different and not optimal because we adopt a single-step gradient method (Nowozin et al.]2016). Thus, b-GAN dynamically updates the generator to minimize the f-divergence between p(x) and q(x). As mentioned previously, f(x) must satisfy f'(1) = 0 in this case because we cannot guarantee that re(x )q(x) is normalized.\nSimilar to GANs, the D-step and G-step work adversarially. In the D-step, re(x) is updated to fit the. ratio between p(x) and q(x). In the G-step, q(x) changes, which means re(x) becomes inaccurate in terms of the density ratio estimator. Next, re(x) is updated in the D-step so that it fits the density ratio of p(x) and the new q(x). This learning situation is derived from Eq. [6, which shows that 0p is updated to increase D f(qre||q) in the D-step. In contrast, 0g is updated to decrease D f(qre||q) in. the G-step.\nIn Section 3.3, we also introduced a f-GAN-like update. Three choices can be considered for the G-step:\n1)Ex~ (r(x))r(x)+ f(r(x)\nNote that f is a convex function, f(1) =0. and f'(1) = 0. It is noted in (Nowozin et al. 2016) that case (2) works better than case (3) in practice. We also confirm this. The complete reason for this is unclear However, we can find a partial reason by differentiating objective functions with respect to r. The derivatives of the objective functions are\n(1) = 0. It is noted in (Nowozin et al.]2016) that. The comparison of G-step objective functions ase (2) works better than case (3) in practice. We also. 1.0 (1) b-GAN: 0.5*(1-r)^2 (2) f-GAN:1-r onfirm this. The complete reason for this is unclear. 0.5 (3 f-GAN:-0.5*rr owever, we can find a partial reason by differentiating. 0.0 jective functions with respect to r. The derivatives 0.5 f the objective functions are. 1.0 1.5 (1)f'(r),(2)- f\"(r),(3)-rf\"(r) 2.0 0.0 0.5 1.0 1.5 2.0\n1)f'(r),(2)- f\"(r),(3)-rf\"(r)\nAll signs are negative when r(x) is less than 1. Usu- Figure 1: The graph of (1), (2), (3) when J. ally, when x is sampled from q(x), r(x) is less than 1. is Pearson divergence. Therefore, we speculate that r(x) is less than 1 during. most of the learning process when x is sampled from. q(x). When r(x) is small, the derivative is also small in (3) because the term r(x) is multipliec. Therefore, the derivative tends to be small in (3). The mechanism pulling r(x) to 1 does not worl when r(x) is small. Thus, the case of (3) does not work well. A similar argument was proposed by. Goodfellow et al.(2014) andNowozin et al.(2016).\nIn our experimental case of (1) and (2) work properly (Section 5). The reason case (2) works is tha. function - f'(r) behaves like an f-divergence and the derivative is large when r(x) is small. 
However. we cannot guarantee that -f'(r) satisfies the conditions of f-divergence between positive measures. i.e, - f'(r) is a convex function and - f\"(1) = 0. If the derivatives in case (2) are negative when r(x. is greater than 1, there is a possibility that the mechanism pulling r(x) to 1 does not occur. In contrast. in case (1), when r(x) is greater than 1, the derivatives are positive, therefore, the mechanism pulling r(x) to 1 occurs. This prevents generators from emitting the same points. We can expect the same. effects as -minibatch discrimination- (Salimans et al.2016).\nThroughout the analysis, we can easily extend the algorithm of b-GAN by using different divergences in the G-step and D-step. The original GAN can be regarded as one of such algorithms\n(a1) = rlogr-r+1 (a = 1) -logr+r-1 (a =-1)\nThe objective function derived from a-divergence is summarized as follows"}, {"section_index": "5", "section_name": "4.2 HEURISTICS", "section_text": "We describe some heuristic methods that work for our experiments. The heuristics introduced here are justified theoretically in Appendix C..\nIn the initial learning process, empirical distribution p and generated distribution q are completely q(x) and tiny when x is taken from q. It seems that the learning does not succeed in this case. In fact, in our setting, when the final activation function of rep (x) is taken from functions in the range (0, oo) b-GAN does not properly work. Therefore, we use a scaled sigmoid function such as a two-times sigmoid function. A similar idea has also been used in (Cortes et al.]2010)\nTable 2: The summary of objective function\nalpha D-step G-step 1 (KL divergence) Ex~q(x)[rep(x) - 1]- Ex~p(x)[logrep(x)] Ex~q(x;eG)[r(x) logr(x) - r(x) + 1] 3 (Pearson divergence) Ex~q(x)[0.5rep(x)2 -0.5]- Ex~p(x)[rep(x)-1 Ex~q(x;0G)[0.5(r(x) - 1)2 -1 (Reversed KL divergence) Ex~q(x)[logrep(x)] ]- Ex~p(x)[- Ex~q(x;0g)[-log(r(x)) + r(x) - 1] rep(x)\nQ = -1 In this case, a-divergence is a Kullback-Leibler (KL) divergence. Density ratio estimation via the KL divergence corresponds to the Kullback-Leibler Importance Estimation Procedure (Sugiyama et al.|2012). In the G-step of an f-GAN-like update, the objective function is Ex~q(x;9c)[- log r(x)] or Ex~q(x;0c)[1 - r(x)]. Q = 3 Density ratio estimation via the Pearson divergence corresponds to the Least- Squares Importance Fitting (Yamada et al.2011). It is more robust than under KL divergence [Yamada et al.]2011) Dawid et al.]2015). This is because Pearson divergence does not include the log term. Hence the algorithm using Pearson divergence should be more stable. In the G-step of f-GAN-like update, the objective function is Ex~q(x;oc)[1 r(x)] or Ex~q(x;0G)[0.5 0.5r(x)2]. Q = -1 Estimating the density ratio using reversed KL divergence seems to be unstable because reversed KL-divergence is mode seeking and the generated distribution changes at each iteration. However, it is preferable to use reversed KL divergence when generating realistic images (Huszar 2015). In the G-step of an f-GAN-like update, the objective function is Eq(x;0c)[r(aj] or Eq(x;0c)[- log r(x)]."}, {"section_index": "6", "section_name": "5 EXPERIMENTS", "section_text": "We conducted experiments to establish that the proposed algorithm works properly and can success. fully generate natural images. The proposed algorithm is based on density ratio estimation; therefore. knowledge regarding the density ratio estimation can be utilized. 
In the experiments, using the Pearson divergence and estimating the relative density ratio is shown to be useful for stable learning We also empirically confirm our statement in Section[3.4] i.e., f-divergence is increased when learning. 0p and decreased when learning Og.."}, {"section_index": "7", "section_name": "5.1 SETTINGS", "section_text": "We applied the proposed algorithm to the CIFAR-10 data set (Krizhevsky2009) and Celeb A data set (Liu et al.||2015) because they are often used in GAN research (Salimans et al. 2016 Goodfellov et al.2014). The images size are 32 32 pixels. All results in this section are analyzed based on the results of the CIFAR-10 data set. The results for the Celeb A data set are presented in Appendix B Our network architecture is nearly equivalent to that of previous study (Radford et al.||2015) (refer tc the appendix A,B for details). Note that unless stated otherwise, the last layer function of rep (x) is a sigmoid function multiplied by two. We used the TensorFlow for automatic differentiation (Abad et al.]2015). For stochastic optimization, Adam was adopted (Kingma & Ba]2014).\nFigure 2: Comparative results: estimated density ratio values rep (x) from the training data (red), the estimated density-ratio values rep (x) from the generated distribution (green), generator losses taken in the D-step and G-step (blue). The top, second, and bottom rows show rep(x) and the losses of b-GAN with the Pearson divergence, KL divergence, modified KL divergence (relative density ratio estimation version, = 0.2), and reversed KL divergence, respectively.\nFigure 2 shows the density ratio estimate rep(x) and loss values of the generators. For eacl divergence, we conducted four experiments with 40,O00 epochs, where the initial learning rate valu. was fixed (5 10-5) with the exception of reversed KL divergence. These results show that th b-GANs using Pearson divergence are stable because the learning did not stop. The same results have been reported in the research into density ratio estimation (Yamada et al.]2011). In contrast b-GANs using the KL divergence are unstable. In fact, the learning stopped between the 20,o00tl and 37,0ooth epoch when the learning rate was not as small. When we use a heuristic method, i.e\nb-GAN.pearson divergence:density-ratic b-GAN.pearson divergence:generative loss 20 1.5 0.4 0.3 1.0 0.2 0.5 0.1 0.0 0.0 5000 10000 15000 20000 25000 30000 35000 40000 5000 10000 15000 20000 25000 30000 35000 40000 b-GAN.KL-divergence:density-ratio b-GAN.KL-divergence:generative loss 2.0 1.0 1.5 0.6 10 0.5 0.0 0.0 5000 10000 15000 20000 25000 30000 35000 40000 20000 25000 30000 35000 40000 b-GAN.KL-divergence:relative-density-ratio b-GAN.KL-divergence:generative loss 1.0 1.5 1.0 0.5 0.0 5000 10000 15000 20000 25000 30000 40000 5000 10000 15000 20000 25000 30000 35000 40000 b-GAN.reversed KL-divergence:! density-ratio b-GAN.reversed KL-divergence:generative loss 1.5 1.0 0.5 0.0 5000 10000 15000 20000 25000 30000 5000 10000 15000 20000 25000\nFigure 3: Density ratio value rep(x) and generator losses of b-GAN when the last output function is a sigmoid function multiplied by 5.\nb-GAN.pearson divergence:divergence difference b-GAN.KL divergence:divergence difference 0.30 0.25 0.6 0.20 ). 
0.15 0.10 0.05 0.00 0.05 0.0 0.10 5000 10000 15000 20000 5000 10000 15000 20000\nFigure 4: Divergence differences between D-step and G-step: b-GAN with Pearson divergence (left) b-GAN with KL divergence (right)\nestimating the relative density ratio as described in Section 4.2, this problem is solved. For reversec KL divergence, the learning stopped too soon if the initial learning rate value was 5 10-5. If the learning rate was 1 10-6, the learning did not stop; however, it was still unstable.\nIn Figure 2, the last layer activation function of the b-GANs is a twofold sigmoid function. In Figure 3, we use a sigmoid function multiplied by five. The results indicate that the estimated density ratio values approach one. We also confirm that the proposed algorithm works with sigmoid functions at other scales.\nNote that learning is successful with an f-GAN-like update when minimizing Eq[- f'(r)]. However the learning f-GAN-like update when minimizing Eg[f(r) - rf'(r)] did not work well for our network architecture and data set.\nWe have proposed a novel unified algorithm to learn a deep generative model from a density ratic estimation perspective. Our algorithm provides the experimental insights that Pearson divergence anc estimating relative density ratio are useful to improve the stability of GAN learning. Other insight regarding density ratio estimation would also be also useful. GANs are sensitive to data sets, the forn of the network and hyper-parameters. Therefore, providing methods to improve GAN learning is meaningful.\nRelated research to our study that focuses on linking density ratio and a GAN, has been performed by explaining specific algorithms independently (Mohamed & Lakshminarayanan|2016). In contrast, our framework is more unified.\nIn future. the following things should be considered\nb-GAN 5.0*sigmoid:density-ratio b-GAN 5.0*sigmoid:generative loss\nb-GAN.pearson divergence:divergence difference b-GAN.KL divergence:divergence difference. 0.30 0.25 0.20 0.15 0.10 0.05 0.00 0.05 0.10 5000 10000 15000 20000 0.1 5000 10000 15000 20000\nFigure 4 shows the estimated f-divergence D f(qreq) before the G-step subtracted by D f(qreq after the G-step. Most of the values are greater than zero, which suggests f-divergence decreases at every G-step iteration. This observation is consistent with our analysis in Section 3.4.\n.What is the optimal divergence? In research regarding density ratio estimation, the Pearson divergence (a = 3) is considered robust (Nam & Sugiyama2015). We empirically and theoretically confirmed the same property when learning deep generative models. It is also reported that using KL-divergence and reversed KL-divergence is not robust as scoring rules (Dawid et al.|2015). For generating realistic images, the reversed KL divergence ( = 1) is preferred because it is mode seeking (Huszar,2015). However, if a is small, the density ratio estimation becomes inaccurate. For a robust density ratio, using power divergence has also been proposed (Sugiyama et al.2012). The determination of the optimal divergence is a persistent problem (Appendix E)."}, {"section_index": "8", "section_name": "ACKNOWLEDGEMENTS", "section_text": "The authors would like to thank Masanori Misono for technical assistance with the experiments. We are grateful to Masashi Sugiyama, Makoto Yamada, and the members of the Preferred Networks team."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "What should be estimates in D-step? In the D-step of b-GAN, r(x) is estimated. However. 
in the original GAN, r(x)/(1 + r(x)) is estimated. As unnormalized models, the latter is more robust than estimating r(x) (Pihlaja et al.|2010) (Appendix E.7). We can consider algorithms that use different divergences in the G-step and D-step. In that case, choice of the divergences are more diverse. Original GANs are described in such algorithms as mentioned in Section 3.5. We can consider algorithms that use multiple divergences. This may improve the stability of learning. When sampling from q(x), if the objective is sampling from real data p(x), r(x) should be multiplied. Hence, the density ratio is also useful when using samples from q(x). How to use samples obtained from generators meaningfully is an remaining important problem.\nM. Abadi, A. Agarwal, and P Barham. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URLhttp: //tensorflow. org/ Software available from tensorflow.org. M. Ali and S. Silvey. A general class of coefficients of divergence of one distribution from another.. Journal Royal Statistical Society Series B, pp. 131142, 1966. S. Amari and A. Cichoki. Information geometry of divergence functions. Bull. Polish. Acad. Sci., 58: 183-195.2010. P.L. Bartlett. M. I. Jordan. and J.D. McAuliffe. Convexity. classification and risk bounds. Journal of the American Statistical Association, 101(473):138156, 2006. A. Basu, I. R. Harris, N. L. Hjort, and M. C. Jones. Robust and efficient estimation by minimising a density power divergence. Biometrika, 85:549559, 1998. C. Cortes, Y. Mansour, and M. Mohri. Learning bounds for importance weighing. In Advances in Neural Information Processing System (NIPS), 2010. A. P. Dawid and M. Musio. Theory and applications of proper scoring rules. Metron, 72:169183, 2014. A. P. Dawid, M. Musio, and L. Ventura. Minimum scoring rule inference. Scandinavian Journal of Statistics. 2015. E. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks. In Advances in Neural Information Processing Systems (NIPS), June 2015. G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In Proc. Conf. on Uncertainty in Artificial Intelligence (UAI) 2015. K. Fukumizu, F. Bach, and M. Jordan. Dimensionality reduction for supervised learning with reproducing kernel hilbert spaces. Journal of Machine Learning Research, 5:7399, 2004. I. Goodfellow, M. Pouget-Abadie, J.and Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In In Advances in Neural Information Processing Systems (NIPS), pp. 2672-2680. 2014. I. J. Goodfellow. On distinguishability criteria for estimating generative models. ArXiv e-prints, 2014.. URLhttps://arxiv.org/pdf/1412.6515v4.pdf A. Gretton, K. Borgwardt, M. Rasch, B. Schoelkopf, and A. Smola. A kernel two-sample test. Journal. of Machine Learning Research,, 13:723773, 2012. M. Gutmann and A. Hyvarinen. Noisecontrastive estimation: A new estimation principle for. unnormalized statistical models. In Artificial Intelligence and Statistics Conference (AISTATS), 2010. M.U. Gutmann and J. Hirayama. Bregman divergence as general framework to estimate unnormalized statistical models. In Uncertainty in Artificial Intelligence (UAI 2011), 2011. F. Hayashi. Econometrics. Princeton University Press, 1997. P. J. Huber and E. M. Ronchetti. Robust statistics. John Wiley & Sons, 2009.\nF. Huszar. 
How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary? ArXiv e-prints, 2015. URLhttp://arxiv.org/pdf/1511.05101v1.pdf T. Kanamori, T. Suzuki, and M. Sugiyama. Divergence estimation and two-sample homogeneity test under semiparametric density-ratio models. IEEE Trans. Inform. Theory, 58(2):708720, 2012. T. Kanamori, T Suzuki, and M. Sugiyama. Divergence estimation and two-sample homogeneity test under semiparametric density-ratio models. IEEE Transactions on Information Theory, 58:. 708-720, 2012. D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR), 2014. A. Krizhevsky. Learning multiple layers of features from tiny images. 2009.. Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In International Conference on Machine Learning (ICML), 2015.. Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proceedings of. International Conference on Computer Vision (ICCV), 2015.. A.K. Menon and C.S. Ong. Linking losses for density ratio and class-probability estimation. In In. International Conference on Machine Learning (ICML), 2016. S. Mohamed and B. Lakshminarayanan. Learning in Implicit Generative Models. ArXiv e-prints,. October 2016. M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundation of Machine Learning. The MIT press,. 2012. H. Nam and M. Sugiyama. Direct density ratio estimation with convolutional neural networks with application in outlier detection. IE1CE Transactions, 98: 1073-1079, 2015. X. Nguyen, M Wainwright, and M. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 2010.. S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training Generative Neural Samplers using. Variational Divergence Minimization. ArXiv e-prints, 2016. URL https://arxiv. org/. pdf/1606.00709v1.pdf M. Pihlaja, M.U. Gutmann, and A Hyvarinen. A family of computationally efficient and simple esti- mators for unnormalized statistical models. In Proc. Conf. on Uncertainty in Artificial Intelligence (UA12010), 2010. J. Qin. Inferences for case-control and semiparametric two-sample density ratio models. Biometrika, 85(3):619639, 1998. A. Radford, L. Metz, and S. Chintala. Unsupervised Representation Learning with Deep Convolu- tional Generative Adversarial Networks. In International Conference on Learning Representations. (ICLR), 2015. M.D Reid and R.C Williamson. Composite binary losses. Journal of Machine Learning Research,. 11:23872422. 2010. M.D Reid and R.C Williamson. Information, divergence and risk for binary experiments. Journal of. Machine Learning Research, 12:731817, 2011. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved Techniques for Training GANs. ArXiv e-prints, June 2016.. I. Sergey and S. Christian. Batch normalization: Accelerating deep network training by reducing. internal covariate shift. 2015. URLhttps://arxiv.org/pdf/1502.03167v3.pdf\napplication in outlier detection. 1EICE Transactions, 98:1073-1079, 2015. X. Nguyen, M Wainwright, and M. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 2010. S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. ArXiv e-prints, 2016. URL https://arxiv.org/ pdf/1606.00709v1.pdf M. Pihlaja, M.U. Gutmann, and A Hyvarinen. 
A family of computationally efficient and simple esti- mators for unnormalized statistical models. In Proc. Conf. on Uncertainty in Artificial Intelligence (UA12010), 2010. J. Qin. Inferences for case-control and semiparametric two-sample density ratio models. Biometrika. 85(3):619639.1998. A. Radford, L. Metz, and S. Chintala. Unsupervised Representation Learning with Deep Convolu- tional Generative Adversarial Networks. In International Conference on Learning Representations (ICLR), 2015. M.D Reid and R.C Williamson. Composite binary losses. Journal of Machine Learning Research 11:23872422, 2010. M.D Reid and R.C Williamson. Information, divergence and risk for binary experiments. Journal of Machine Learning Research, 12:731817, 2011. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved Techniques for Training GANs. ArXiv e-prints, June 2016. I. Sergey and S. Christian. Batch normalization: Accelerating deep network training by reducing internal covariate shift. 2015. URLhttps://arxiv.org/pdf/1502.03167v3.pdf I Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:6793, 2011. M. Sugiyama, T. Suzuki, and T. Kanamori. Density ratio matching under the bregman divergence: a unified framework of density ratio estimation. Annals of the Institute of Statistical Mathematics 64:1009-1044, 2012. M. Sugiyama, M. Yamada, and M.C. du Plessis. Learning under non-stationarity:covariate shift and class-balance change. WIREs Computational Statistics, 5:465-477, 2013. M. Yamada, T. Suzuki, T. Kanamori, H. Hachiya, and M. Sugiyama. Relative density-ratio estimation for robust distribution comparison. In Advances in Neural Information Processing Systems (NIPS), pp. 594-602, 2011. J. Zhao, M. Mathieu, and Y. LeCun. Energy-based Generative Adversarial Network. ArXiv e-prints September 2016."}, {"section_index": "10", "section_name": "A CIFAR-10 DATASET", "section_text": "Figure 5 shows samples generated randomly using b-GANs. These results indicate that b-GANs can create natural images successfully. We did not conduct a Parzen window density estimation for the evaluations because of Theis et al., [2016].\nFigure 5: (Left) original images set, (middle) a set of images generated based on Pearson divergence and (right) a set of images based on the KL divergence.\nHere, we describe the network architecture of rep(x) and Geg(z) used in the b-GAN. BN is the batch normalization layer (Sergey & Christian 2015)\nx > Conv(3, 64) -> lRelu > Conv(64, 256) -> BN -> lRelu -> Conv(256, 512) -> BN > lRelu - Reshape(4 4 512) -> Linear(4 4 512, 1) -> 2 Sigmoid\nWe also applied our algorithm to the Celeb A data set. The images are resized and cropped to 64 64 pixcels. Figures 6 and 7 show samples randomly generated using b-GANs. The Network architecture. is as follows.\nx -> Conv(3, 64) -> lRelu -> Conv(64, 128) -> BN -> lRelu -> Conv(128, 256) -> BN -> lRelu - Conv(256, 512) -> BN -> lRelu -> Reshape(4 4 512) -> Linear(4 4 512, 1) -> 2Sigmoid\nFigure 6: Pearson divergence\nEx~p[f' (re(x))]- Ex~q[(f'(re(x))re(x)- f(re(x)))]=-BDf(r||re) + E\nUsing BD (r|[re) > 0 yields Eq.7] We have BD (r|[re) = 0 when r is equal to re. Thus, the equality holds if and only if r is equal to re..\nPearson divergence is preferable to KL divergence in density ratio estimation.. Relative density ratio is useful (introduced in Sec 4.2).. 
The meaning of bounding the last output functions of discriminators (introduced in Se\nFigure 7: KL divergence\nr.h.s of Eq.2 Ex~p(x)[f'(re(x))]- Ex~q(x)[f*(f'(re(x)))] (10 = Ex~p(x)[f'(re(x))]- Ex~q(x)[f(re(x))re(x)- f(re(x))] Ex~p(x)[f'(re(x))]- Ex~q(x)[(f'(ro(x))re(x)- f(re(x)))] = the negative of Eq. 5.\nOur objective is estimating r(x) = p(r) . . We define hypothesis sets as H. In Section 3.4, we assume q (x) H as parametric models for simplicity. In this section, we do not restrict H to parametric models. We define ro and rs as\nro argminheHBRf(h r s argminheHBRf(h)\nro argminheHBRf(h) rs argminheHBRf(h)\nBRf(rs) - BRf(r) BRf(rs)- BRf(ro) + BRf(ro)- BRf(r) = < 2sup|BRf(h) - BRf(h)[+ BRf(ro)- BRf(r) hEH\nEx~p(x)[0.5h(x)2]- Ex~q(x)[h(x)]\nWe denote the Rademacher complexity of H as Am,g when x is sampled from q(x) and the sample size is m. The first term of Eq.|12|is upper bounded by Talagrand's lemma. For any with probability as least 1 - o, we have\nlog sup|Ex~p(x)[0.5h(x)2] 0.5h(xi)2| CRm,p(H) + 0.5C2 2m hEH\nlog s 1 sup|Ex~q(x)[h(x)] - >`h(xi)|<Rm,q(H)+C m 2m hEH\n10g 28 1 2CRm,p(H) + 2Rm,q(H) + (C2 + 2C) 2m\nwhere BR is an empirical approximation of BR. Note that BR(h) reaches minimum values if and only if r = h holds. We want to analyze BRf(rs) - BRf(r). The regret, i.e., BRf(rs) - BRf(r) can be bounded as follows:\nf(rs) - BRf(r) BRf(rs) - BRf(ro)+ BRf(ro)- BRf(r) 2sup|BRf(h) - BRf(h)[+ BRf(ro)- BRf(r). (1 hEH\nThe first term of Eq. 11|is bounded by uniform law of large numbers. To do that, assume that. all elements in H are bounded by constant C. First, we consider the case when f is the Pearson divergence. In that case, BR f(h) is.\nwhere samples are taken from p(x) independently. the fact that 0.5h(x)2 is C-Lipchitz over the interval (0, C) is used (C is a constant real number). The first term of Eq.[12|is upper-bounded by Talagrand's lemma. For any & with probability at least 1 - , we have\nwhere samples are taken from q(x) independently. By combining Eq.[13|and Eq.14 for any with probability as least 1 , the first term of Eq.11is bounded by.\nEx~q(x)[h(x)] - Ex~p(x)[log h(x)]\nThe second term of Eq.16|cannot be bounded because log(x) is no longer Lipchitz continuous over (0, C). This explains why Pearson divergence is preferable to KL divergence in density ratio estimation. However, if h(x) is lower-bounded by the constant real number C, the problem is solved. Using the relative density ratio has the same effect..\nIn our setting, r(x) is a sigmoid function multiplied by C. If C is large, the approximation error i.e., BR(ro) - BRf(r) is small. However, there is a possibility that the estimation error increases according to Eq.[15] which may lead to the learning instability."}, {"section_index": "11", "section_name": "E.1 NOTATION", "section_text": "We summarize notations frequently used in this section\nFollowing Kanamori et al.(2012), we expand the explanation of density ratio estimation. As noted in Section 3, the objective function of density ratio estimation is\n(re)=\nIn this section, we review researches related to density ratio estimation, entangling them to b-GAN The Eq. 4|has also been used in class probability density estimation and unnormalized models research. We briefly describe research that is closely connected to density ratio estimation and extract ideas that can also be applied to b-GAN.\nr(x) = p(x)/q(x) Density ratios between p(x) and q(x). 
In b-GAN, p(x) is the probability density function of \"'real data\"' and q(x) is the probability density function of 'generated data\". (n) 9 Samples taken from p(x). (d) Samples taken from q(x). = (p, q, ) The joint distribution of X and Y. The variable X has the the mixture distribution of p + (1 - )q and Y is a binary label taking values in {-1, +1} which satisfies = P(Y = 1); thus, (p, q) = (P(XY = 1), P(X|Y = -1)) holds. n The Bayes optimal estimator P(Y = 1|X) for binary classification. . l : {-1, 1} [0, 1] -> R Loss function . h1 : X -> [0, 1] An element of hypothesis sets. In this case, the objective is estimating P(Y = 1|x). .h9 : X -> V An element of hypothesis sets. V is a subset of R. Note that h1 is a special case of hg. : 0, 1-> V A link function k(x, ) A positive definite and characteristic (Fukumizu et al.||2004) kernel on the measurable space of R. .H A reproducing kernel Hilbert space with a kernel k (Steinwart|2011). An inner product in Hk is (,:) : R > Hg A characteristic function:x -> k(x, ). X, A random variable with probability density function p. B f [pl[q] Bregman divergence with f between p and q\naBRf(re) de\nn 1 n i=1 i=1\nEfficiency is not the absolute criterion for choosing losses. For example, when there are many outliers. in the data, robustness is more important than efficiency. In reality, an M estimator was introduced in the contest of robust statistics (Huber & Ronchetti2009).\nWe have explained density ratio estimation, starting with the Bregman divergence. Importantly density-ratio estimation is equivalent to class probability estimation. For details, see (Reid &. Williamson2011}2010] Dawid & Musio| 2014] Menon & Ong2016]\nA proper loss is naturally extended to a composite proper loss by introducing a link function . In this case, the objective is estimating (n(x)) correctly. The estimator is obtained by the empirical mini mization of full risk E(x,Y)~D[l*(Y,h9)] when l*(Y, h9) means l(Y, -1(h9(x))) and (h1(x)) is the same as h9(x). When the hypothesis h9(x), which minimizes E(x,y)~D[l$(Y, h9)], is the Bayes optimal estimator (n(x)) uniquely, such a loss is called a proper composite loss with a link function .\nThe conditional risk (conditioned on x) E can be decomposed to\nThe conditional Bayes risk is\nnL1(nx)))+(1-n)L_1(n(x)))=n+1n)+(1-n)_1n)\nL(h9; D,l) - L( o n; D,l) Ex [Bc[n(X)||-1(h9(X))]]\n' In usual moment matching, s(x) is not be dependent on the form of probability density function. However. it depends on the form of re in this case..\nBy differentiating Eq.17|with respect to 0, the above method can be regarded as a type of moment matching as follows:\nThe above Eq.18|is a form of moment matching|1|What is the optimal s(x)? Here, we focus on the. variance of the estimator (efficiency). It is known that the Eq. 17 derived from the logistic model. is known to be optimal (Qin| 1998). It is a natural consequence because the logistic model can. be considered as a maximum likelihood and maximum likelihood reaches the Cramer-Rao bound. asymptotically. Typically, general moment matching (GMM) achieves the lower bound when the. estimating equation is the score of observations. i.e.. when GMM is identical to maximum likelihood\nAs in Section E.1, we introduce the variable Y. The joint distribution of X and Y is denoted as D = (p, q, ). Class probability estimation can be regraded as a minimization problem of the empirical estimation of the full risk E(x,y)~[l(Y, h1)], which is denoted L(h1;D,l) (h1 is a hypothesis element). 
When the hypothesis h1, which minimizes E(x,y)~[l(Y, h1)], is the Bayes optimal estimator n(x) uniquely, such a loss is called a proper loss.\nEy~n[l*(Y,h9)]=nL1(h9)+(1-n)L_1(h9)\nThe problem of minimizing the composite proper loss turns out be the density ratio estimation problem by setting (x) as u/(1 - u) and = 0.5 (Menon & Ong 2016). In this case, the LHS of Eq.22|can be written as Ex~q[co[(n(X)|h9(X)]]((n(X r(x)), where c is given by\nx c:x->1+x x\nThe equation Ex~q [Bco [r(X)||h9(X)]] corresponds to Eq.4[by substituting c with f and h9 with re.\nWhat loss is the optimal loss? The above loss can be written in another form using weight by transforming Eq.22|further. Determining what loss is better has been analyzed from the perspective of weight. For example,Reid & Williamson (2010) proposed the 'minimal symmetric convex proper loss\"' for surrogate loss. Regarding density ratio,Menon & Ong (2016) suggest that Pearson divergence is robust because the weight of Pearson divergence is uniform. However, according to their covariate shift experiment, Pearson divergence was not significantly superior to other divergence.\nWhat is the difference between density ratio estimation (class probability estimation) and classifica. tion? The objective and assumption differ. As for assumption, in density ratio, the situation where p(x) and q(x) are overlapping would be preferable. However, in classification, the situation where. p(x) and q(x) separate would be preferable. In addition, the objective of classification is slightly. different from estimating a Bayes rule correctly. Margin loss is widely used in classification rather than zero-one loss. The theoretical guarantee of using margin loss for classification is that it is. included in class calibrated loss (Bartlett et al.]2006). However, the margin loss is not equivalent to a. proper loss, i.e, it is often not suitable for estimating n. The condition whereby margin loss is a proper. loss is explained byReid & Williamson (2010). A GAN using margin loss has been proposed (Zhao. et al.|2016). Note that using margin loss is not supposed in b-GAN. They succeeded in generating. high resolution images.\nWhat is robust loss and divergence? Basu et al. (1998) proposed a robust divergence called powe divergence (Basu et al.| 1998], which is given as v [p[q], where v is\n1 -(+1)x+)\nDawid et al.[(2015) analyze robust proper loss from the perspective of influence function. They. proposed a concept of B-robustness from the perspective of influence function. It is stated that using. KL divergence and reversed KL divergence is not robust because the second derivative of f is not bounded at 0. That is a similar conclusion to our analysis in Appendix D.."}, {"section_index": "12", "section_name": "E.5 KERNEL METHODS", "section_text": "We assume that k(x, ) is a positive definite kernel. When X is a random variable taking values in R and I(X) is a random variable taking values in Hg with a characteristic map : x -> k(, x), we can think of the mean of random variable (X) denoted as mx taking values in Hk, which satisfy (f,mx) = E[<f,I(X))]= E[f(x)](Vf E Hk) and mx(y) =(mx,k(,y)) = E[k(X,y)]\nAs the density ratio estimation methods using kernels, the objective function is the empirical approxi mation of |mx - m'x.?.. As generative moment matching networks (GMMN), the objective.\nThis is also called -divergence (Amari & Cichoki]2010). The robust estimation equation is derived from power divergence compared to maximum likelihood. 
By setting f as v, robust density ratio estimation has been proposed (Sugiyama et al.|2012). In this case, the objective function is Ex~q(x)[Bv3[r|re]].\nfunction is ||m'x, - m'x, |. Dziugaite et al. 2015fLi et al.2015). GMMNs seem to be superior to b-GAN because they can be trained without density ratio. However, the choice of kernels is difficult In addition, an autoencoder appears to be required for generating complex data..\nWe consider the problem of f-divergence estimation between p(x) and q(x). This is applied straight forwardly to a two-sample test. Variational f-divergence estimation using Eq. 3 is proposed (Nguyen et al.[[2010). In addition, the two step method, i.e., first estimating density ratio and then estimating f-divergence, is proposed (Kanamori et al.]2012). This method is also applied to a two-sample test. The latter method is similar to b-GAN. A kernel two sample test is also introduced calculating mx H. in (Gretton et al.2012).\nIn this case, the objective is estimating 0 = {, C} when the log-likelihood of normalized mode log p(x; 0) is equal to log p(x; $) + C and C is a normalizing constant. Compared to b-GAN, the auxiliary distribution q(x) is known. The parameter 0 can be estimated as a minimization problem ol Eq.17|by replacing re(x) with p(x; 0)/q(x) (Pihlaja et al.2010). As similar algorithm, the method estimating re first, then estimating p(x; 0) as re(x)q(x) is suggested byGutmann & Hirayama(2011 Note that the latter method is similar to b-GAN.\nPihlaja et al. (2011) analyzed what loss is better by differentiating loss. They experimentally confirmed that noise contrastive estimation is robust with respect to the choice of the auxiliary distribution."}] |
BkbY4psgg | [{"section_index": "0", "section_name": "MAKING NEURAL PROGRAMMING ARCHITECTURES GENERALIZE VIA RECURSION", "section_text": "Jonathon Cai, Richard Shin, Dawn Song\nDepartment of Computer Science University of California, Berkeley\njonathon, ricshin,dawnsong}@cs.berkeley.edu\nEmpirically, neural networks that attempt to learn programs from data have exhib- ited poor generalizability. Moreover, it has traditionally been difficult to reason about the behavior of these models beyond a certain level of input complexity. In order to address these issues, we propose augmenting neural architectures with a key abstraction: recursion. As an application, we implement recursion in the Neural Programmer-Interpreter framework on four tasks: grade-school addition. bubble sort, topological sort, and quicksort. We demonstrate superior generaliz- ability and interpretability with small amounts of training data. Recursion divides the problem into smaller pieces and drastically reduces the domain of each neu- ral network component, making it tractable to prove guarantees about the overall system's behavior. Our experience suggests that in order for neural architectures to robustly learn program semantics, it is necessary to incorporate a concept like recursion."}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "Training neural networks to synthesize robust programs from a small number of examples is a chal. lenging task. The space of possible programs is extremely large, and composing a program that per. forms robustly on the infinite space of possible inputs is difficult-in part because it is impractical to obtain enough training examples to easily disambiguate amongst all possible programs. Never. theless, we would like the model to quickly learn to represent the right semantics of the underlying. program from a small number of training examples, not an exhaustive number of them..\nThus far, to evaluate the efficacy of neural models on programming tasks, the only metric that ha been used is generalization of expected behavior to inputs of greater complexity (Vinyals et al 2015).Kaiser & Sutskever(2015),Reed & de Freitas(2016), Graves et al.(2016),Zaremba et al 2016)). For example, for the addition task, the model is trained on short inputs and then tested or its ability to sum inputs with much longer numbers of digits. Empirically, existing models suffe from a common limitation-generalization becomes poor beyond a threshold level of complexity Errors arise due to undesirable and uninterpretable dependencies and associations the architecture earns to store in some high-dimensional hidden state. This makes it difficult to reason about wha the model will do when given complex inputs.\nOne common strategy to improve generalization is to use curriculum learning, where the model is. trained on inputs of gradually increasing complexity. However, models that make use of this strategy. eventually fail after a certain level of complexity (e.g. the single-digit multiplication task in|Zaremba et al.(2016), the bubble sort task in|Reed & de Freitas (2016), and the graph tasks in|Graves et al.. (2016)). In this version of curriculum learning, even though the inputs are gradually becoming more. complex, the semantics of the program is succinct and does not change. Although the model is exposed to more and more data, it might learn spurious and overly complex representations of the program, as suggested inZaremba et al.[(2016). That is to say, the network does not learn the true. 
program semantics.\nIn this paper, we propose to resolve these issues by explicitly incorporating recursion into neural architectures. Recursion is an important concept in programming languages and a critical tool to"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "reduce the complexity of programs. We find that recursion makes it easier for the network to learr the right program and generalize to unknown situations. Recursion enables provable guarantees or neural programs' behavior without needing to exhaustively enumerate all possible inputs to the pro grams. This paper is the first (to our knowledge) to investigate the important problem of provable generalization properties of neural programs. As an application, we incorporate recursion into the Neural Programmer-Interpreter architecture and consider four sample tasks: grade-school addition bubble sort, topological sort, and quicksort. Empirically, we observe that the learned recursive pro grams solve all valid inputs with 100% accuracy after training on a very small number of examples out-performing previous generalization results. Given verification sets that cover all the base cases and reduction rules, we can provide proofs that these learned programs generalize perfectly. This is the first time one can provide provable guarantees of perfect generalization for neural programs.\n2 THE PROBLEM AND OUR APPROACI 21 THE PpOR OE GENEI II7 ATION\nWhen constructing a neural network for the purpose of learning a program, there are two orthogonal aspects to consider. The first is the actual model architecture. Numerous models have been proposed for learning programs; to name a few, this includes the Differentiable Neural Computer (Graves et al.]2016), Neural Turing Machine (Graves et al.]2014), Neural GPU (Kaiser & Sutskever2015) Neural Programmer (Neelakantan et al.|2015), Pointer Network (Vinyals et al.|2015), Hierarchical Attentive Memory (Andrychowicz & Kurach|2016), and Neural Random Access Machine (Kurach et al.] 2016). The architecture usually possesses some form of memory, which could be internal (such as the hidden state of a recurrent neural network) or external (such as a discrete \"scratch pad' or a memory block with differentiable access). The second is the training procedure, which consists of the form of the training data and the optimization process. Almost all architectures train on program input/output pairs. The only model, to our knowledge, that does not train on input-output pairs is the Neural Programmer-Interpreter (Reed & de Freitas|2016), which trains on synthetic execution traces.\nTo evaluate a neural network that learns a neural program to accomplish a certain task, one common evaluation metric is how well the learned model M generalizes. More specifically, when M is trained on simpler inputs, such as inputs of a small length, the generalization metric evaluates how well M will do on more complex inputs, such as inputs of much longer length. M is considered tc have perfect generalization if M can give the right answer for any input, such as inputs of arbitrary. length.\nAs mentioned in Section 1] all approaches to neural programming today fare poorly on this general. ization issue. We hypothesize that the reason for this is that the neural network learns to spuriously depend on specific characteristics of the training examples that are irrelevant to the true program. 
semantics, such as length of the training inputs, and thus fails to generalize to more complex inputs.\nIn addition, none of the current approaches to neural programming provide a method or even aim to. enable provable guarantees about generalization. The memory updates of these neural programs are so complex and interdependent that it is difficult to reason about the behaviors of the learned neural. program under previously unseen situations (such as problems with longer inputs). This is highly. undesirable, since being able to provide the correct answer in all possible settings is one of the most. important aspects of any learned neural program."}, {"section_index": "3", "section_name": "2.2 OUR APPROACH USING RECURSION", "section_text": "In this paper, we propose that the key abstraction of recursion is necessary for neural programs to. generalize. The general notion of recursion has been an important concept in many domains, in- cluding mathematics and computer science. In computer science, recursion (as opposed to iteration) involves solving a larger problem by combining solutions to smaller instances of the same problem.. Formally, a function exhibits recursive behavior when it possesses two properties: (1) Base cases- terminating scenarios that do not use recursion to produce answers; (2) A set of rules that reduces all other problems toward the base cases. Some functional programming languages go so far as not to. define any looping constructs but rely solely on recursion to enable repeated execution of the same. code.\nIn this paper, we propose that recursion is an important concept for neural programs as well. Ir. fact, we argue that recursion is an essential element for neural programs to generalize, and makes i tractable to prove the generalization of neural programs. Recursion can be implemented differentl. for different neural programming models. Here as a concrete and general example, we consider a. general Neural Programming Architecture (NPA), similar to Neural Programmer-Interpreter (NPI in Reed & de Freitas (2016). In this architecture, we consider a core controller, e.g., an LSTM. in NPI's case, but possibly other networks in different cases. There is a (changing) list of neura. programs used to accomplish a given task. The core controller acts as a dispatcher for the programs. At each time step, the core controller can decide to select one of the programs to call with certair. arguments. When the program is called, the current context including the caller's memory state i stored on a stack; when the program returns, the stored context is popped off the stack to resume. execution in the previous caller's context..\nIn this general Neural Programming Architecture, we show it is easy to support recursion. In par ticular, recursion can be implemented as a program calling itself. Because the context of the calle is stored on a stack when it calls another program and the callee starts in a fresh context, this en ables recursion simply by allowing a program to call itself. In practice, we can additionally use tai recursion optimization to avoid problems with the call stack growing too deep. Thus, any genera Neural Programming Architecture supporting such a call structure can be made to support recursion In particular, this condition is satisfied by NPI, and thus the NPI model naturally supports recursioi (even though the authors of NPI did not consider this aspect explicitly).\nBy nature, recursion reduces the complexity of a problem to simpler instances. 
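As a toy illustration of these two properties (ours, not the paper's), consider recursive grade-school addition over little-endian digit lists in Python: the empty input is the base case, and handling a single column before recursing on the strictly smaller remainder is the reduction rule.

```python
def add_digits(a, b, carry=0):
    """Recursively add two numbers stored as little-endian digit lists."""
    if not a and not b:                       # base case
        return [carry] if carry else []
    s = (a[0] if a else 0) + (b[0] if b else 0) + carry
    # reduction rule: one column handled, recurse on the smaller problem
    return [s % 10] + add_digits(a[1:], b[1:], s // 10)

# 109 + 101 = 210, with digits stored least-significant first
assert add_digits([9, 0, 1], [1, 0, 1]) == [0, 1, 2]
```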
Thus, recursion helps decompose a problem and makes it easier to reason about a program's behavior for previously unseen situations such as longer inputs. In particular, given that a recursion is defined by the two properties mentioned before, the base cases and the set of reduction rules, we can prove that a recursive neural program generalizes perfectly if we can prove that (1) it performs correctly on the base cases, and (2) it learns the reduction rules correctly. For many problems, the base cases and reduction rules usually consist of a finite (often small) number of cases. For problems where the base cases may be extremely large or infinite, such as certain forms of motor control, recursion can still help reduce the problem of generalization to these two aspects and make the generalization problem significantly simpler to handle and reason about.

As a concrete instantiation, we show in this paper that we can enable recursive neural programs in the NPI model, and thus enable perfectly generalizable neural programs for tasks such as sorting, where the original, non-recursive NPI program fails. As aforementioned, the NPI model naturally supports recursion. However, the authors of NPI did not explicitly consider the notion of recursion and, as a consequence, did not learn recursive programs. We show that by modifying the training procedure we enable the NPI model to learn recursive neural programs. As a consequence, our learned neural programs empirically achieve perfect generalization from a very small number of training examples. Furthermore, given a verification input set that covers all base cases and reduction rules, we can formally prove that the learned neural programs achieve perfect generalization after verifying their behavior on the verification input set. This is the first time one can provide provable guarantees of perfect generalization for neural programs.

We would also like to point out that in this paper, we provide as an example one way to train a recursive neural program, by providing a certain training execution trace to the NPI model. However, our concept of recursion for neural programs is general. In fact, it is one of our future directions to explore new ways to train a recursive neural program without providing explicit training execution traces, or with only partial or non-recursive traces.

As discussed in Section 2, the Neural Programmer-Interpreter (NPI) is an instance of a Neural Programming Architecture and hence it naturally supports recursion. In this section, we give a brief review of the NPI architecture from Reed & de Freitas (2016) as background.

We describe the details of the NPI model relevant to our contributions. We adapt machinery from the original paper slightly to fit our needs. The NPI model has three learnable components: a task-agnostic core, a program-key embedding, and domain-specific encoders that allow the NPI to operate in diverse environments.

The NPI accesses an external environment, Q, which varies according to the task. The core module of the NPI is an LSTM controller that takes as input a slice of the current external environment, via a set of pointers, and a program and arguments to execute. NPI then outputs the return probability and the next program and arguments to execute. Formally, the NPI is represented by the following set of equations:

st = fenc(et, at)

ht = flstm(st, pt, ht-1)

rt = fend(ht), pt+1 = fprog(ht), at+1 = farg(ht)

t is a subscript denoting the time-step; fenc is a domain-specific encoder (to be described later) that takes in the environment slice et and arguments at; flstm represents the core module, which takes in the state st generated by fenc, a program embedding pt ∈ R^P, and hidden LSTM state ht; fend decodes the return probability rt; fprog decodes a program key embedding pt+1;¹ and farg decodes arguments at+1.
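One plausible way to realize these equations in code is sketched below in PyTorch; the class and dimension names are illustrative assumptions (the paper's own implementation is in Keras), not the authors' code.

```python
import torch
import torch.nn as nn

class NPICoreSketch(nn.Module):
    """One NPI controller step: h_t = f_lstm(s_t, p_t, h_{t-1}), then
    r_t, p_{t+1}, a_{t+1} are decoded from h_t with small linear heads."""
    def __init__(self, state_dim=128, prog_dim=64, hidden_dim=256,
                 n_programs=16, arg_dim=3):
        super().__init__()
        self.cell = nn.LSTMCell(state_dim + prog_dim, hidden_dim)
        self.f_end = nn.Linear(hidden_dim, 1)            # return prob r_t
        self.f_prog = nn.Linear(hidden_dim, n_programs)  # next-program scores
        self.f_arg = nn.Linear(hidden_dim, arg_dim)      # next arguments

    def step(self, s_t, p_t, hc):
        # s_t: encoded environment slice; p_t: current program embedding
        h, c = self.cell(torch.cat([s_t, p_t], dim=-1), hc)
        r_t = torch.sigmoid(self.f_end(h))
        prog_logits = self.f_prog(h)   # decoding these gives p_{t+1}
        a_next = self.f_arg(h)
        return r_t, prog_logits, a_next, (h, c)
```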
The outputs rt, pt+1, at+1 are used to determine the next action, as described in Algorithm 1. If the program is primitive, the next environmental state et+1 will be affected by pt and at, i.e. et+1 ~ fenv(et, pt, at). As with the original NPI architecture, the experiments for this paper always used a 3-tuple of integers at = (at(1), at(2), at(3)).

Algorithm 1 Neural programming inference

1: Inputs: Environment observation e, program p, arguments a, stop threshold α
2: function RUN(e, p, a)
3:   h ← 0, r ← 0
4:   while r < α do
5:     s ← fenc(e, a), h ← flstm(s, p, h)
6:     r ← fend(h), p2 ← fprog(h), a2 ← farg(h)
7:     if p is a primitive function then
8:       e ← fenv(e, p, a)
9:     else
10:      RUN(e, p2, a2)

A description of the inference procedure is given in Algorithm 1. Each step during an execution of the program does one of three things: (1) another subprogram along with associated arguments is called, as in Line 10, (2) the program writes to the environment if it is primitive, as in Line 8, or (3) the loop is terminated if the return probability exceeds a threshold α, after which the stack frame is popped and control is returned to the caller. In all experiments, α is set to 0.5. Each time a subprogram is called, the stack depth increases.

It is important to emphasize that at inference time in the NPI, the hidden state of the LSTM controller is reset (to zero) at each subprogram call, as in Line 3 of Algorithm 1 (h ← 0). This functionality is critical for implementing recursion, since it permits us to restrict our attention to the currently relevant recursive call, ignoring irrelevant details about other contexts.

1 The original NPI paper decodes to a program key embedding kt ∈ R^K and then computes a program embedding pt+1, which we also did in our implementation, but we omit this for brevity.

The training data for the Neural Programmer-Interpreter consists of full execution traces for the program of interest. A single element of an execution trace consists of a step input-step output pair, which can be synthesized from Algorithm 1; this corresponds to, for a given time-step, the step input tuple (e, p, a) and the step output tuple (r, p2, a2). An example of part of an addition task trace, written in shorthand, is given in Figure 1. For example, a step input-step output pair in Lines 2 and 3 of the left-hand side of Figure 1 is (ADD1, WRITE OUT 1). In this pair, the step input runs a subprogram ADD1 that has no arguments, and the step output contains a program WRITE that has arguments of OUT and 1. The environment and return probability are omitted for readability. Indentation indicates the stack is one level deeper than before.

Non-Recursive                 Recursive
1  ADD                        1  ADD
2    ADD1                     2    ADD1
3      WRITE OUT 1            3      WRITE OUT 1
4      CARRY                  4      CARRY
5        PTR CARRY LEFT       5        PTR CARRY LEFT
6        WRITE CARRY 1        6        WRITE CARRY 1
7        PTR CARRY RIGHT      7        PTR CARRY RIGHT
8    LSHIFT                   8    LSHIFT
9      PTR INP1 LEFT          9      PTR INP1 LEFT
10     PTR INP2 LEFT          10     PTR INP2 LEFT
11     PTR CARRY LEFT         11     PTR CARRY LEFT
12     PTR OUT LEFT           12     PTR OUT LEFT
13   ADD1                     13   ADD
14   ...                      14   ...

Figure 1: Addition Task. The non-recursive trace loops on cycles of ADD1 and LSHIFT, whereas in the recursive version, the ADD function calls itself (Line 13).
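To make the contrast in Figure 1 concrete, here is a hypothetical Python sketch of how the two trace shapes could be synthesized, eliding the WRITE/CARRY/PTR sub-steps of each cycle; the tail call to ADD in the recursive trace is exactly where the LSTM hidden state gets reset during training.

```python
def add_trace(n_columns, recursive=True):
    """Synthesize the top-level shape of an ADD trace over the remaining
    columns (names follow Figure 1; sub-steps of ADD1/LSHIFT elided)."""
    trace = []
    while n_columns > 0:
        trace += ["ADD1", "LSHIFT"]   # add one column, shift pointers left
        n_columns -= 1
        if recursive and n_columns > 0:
            # Tail-recursive call: the trace records a call to ADD itself,
            # so the remaining columns become a fresh, smaller subproblem.
            return trace + ["ADD"] + add_trace(n_columns)
    return trace

assert add_trace(3, recursive=False) == ["ADD1", "LSHIFT"] * 3
assert add_trace(3) == ["ADD1", "LSHIFT", "ADD",
                        "ADD1", "LSHIFT", "ADD",
                        "ADD1", "LSHIFT"]
```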
"}, {"section_index": "4", "section_name": "3.2 RECURSIVE FORMULATIONS FOR NPI PROGRAMS", "section_text": "We emphasize that the overall goal of this work is to enable the learning of a recursive program. The learned recursive program differs from the neural programs learned in all previous work in an important aspect: previous approaches do not explicitly incorporate this abstraction, and hence generalize poorly, whereas our learned neural programs incorporate recursion and achieve perfect generalization.

Since NPI naturally supports the notion of recursion, a key question is how to enable NPI to learn recursive programs. We found that changing the NPI training traces is a simple way to enable this. In particular, we construct new training traces which explicitly contain recursive elements and show that with this type of trace, NPI easily learns recursive programs. In future work, we would like to decrease supervision and construct models that are capable of coming up with recursive abstractions themselves.

In what follows, we describe the way in which we constructed NPI training traces so as to make them contain recursive elements and thus enable NPI to learn recursive programs. We describe the recursive re-formulation of traces for two tasks from the original NPI paper, grade-school addition and bubble sort. For these programs, we re-use the appropriate program sets (the associated subprograms), and we refer the reader to the appendix of Reed & de Freitas (2016) for further details on the subprograms used in addition and bubble sort. Finally, we implement recursive traces for our own topological sort and quicksort tasks.

For addition, the domain-specific encoder is

fenc(Q, i1, i2, i3, i4, at) = MLP([Q(1, i1), Q(2, i2), Q(3, i3), Q(4, i4), at(1), at(2), at(3)])

where the environment Q ∈ R^{4×N×K} is a scratch-pad that contains four rows (the first input number, the second input number, the carry bits, and the output) and N columns. K is set to 11, to represent the range of 10 possible digits, along with a token representing the end of input.² At any given time, the NPI has access to the values pointed to by four pointers, one in each of the four rows, represented by Q(1, i1), Q(2, i2), Q(3, i3), and Q(4, i4).

The non-recursive trace loops on cycles of ADD1 and LSHIFT. ADD1 is a subprogram that adds the current column (writing the appropriate digit to the output row and carrying a bit to the next column if needed). LSHIFT moves the four pointers to the left, to move to the next column. The program terminates when seeing no numbers in the current column.

Figure 1 shows examples of non-recursive and recursive addition traces. We make the trace recursive by adding a tail recursive call into the trace for the ADD program after calling ADD1 and LSHIFT, as in Line 13 of the right-hand side of Figure 1. Via the recursive call, we effectively forget that the column just added exists, since the recursive call to ADD starts with a new hidden state for the LSTM controller. Consequently, there is no concept of length relevant to the problem, which has traditionally been an important focus of length-based curriculum learning.

2 The original paper uses K = 10, but we found it necessary to augment the range with an end token, in order to terminate properly.
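Read as code, the addition encoder might look like the following hypothetical PyTorch sketch; the sizes and the one-hot digit representation are assumptions, since the paper only specifies fenc as an MLP over the four pointed-to values and the argument tuple.

```python
import torch
import torch.nn as nn

K, ARG_DIM, STATE_DIM = 11, 3, 128   # assumed sizes, not from the paper

class AdditionEncoderSketch(nn.Module):
    """f_enc for addition: embed the four digits under the pointers,
    Q(1,i1)..Q(4,i4), plus the argument tuple a_t, into a state s_t."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * K + ARG_DIM, STATE_DIM), nn.ReLU(),
            nn.Linear(STATE_DIM, STATE_DIM))

    def forward(self, digits_onehot, args):
        # digits_onehot: (batch, 4, K) one-hot digits; args: (batch, 3) floats
        x = torch.cat([digits_onehot.flatten(1), args], dim=-1)
        return self.mlp(x)
```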
Figure 2: Bubble Sort Task. The non-recursive trace loops on cycles of BUBBLE and RESET. The difference between the partially recursive and fully recursive versions is in the indentation of Lines 10-15 and 20-22 (bolded), since in the full recursive version, BSTEP and LSHIFT are made tail recursive; the final calls to BSTEP and LSHIFT return immediately as they occur after the pointer reaches the end of the array. Also note that COMPSWAP conditionally swaps numbers under the bubble pointers.

For bubble sort, the domain-specific encoder is

fenc(Q, i1, i2, i3, at) = MLP([Q(1, i1), Q(1, i2), i3 == length, at(1), at(2), at(3)])

where the environment Q ∈ R^{1×N×K} is a scratch-pad that contains 1 row, to represent the state of the array as sorting proceeds in-place, and N columns. K is set to 11, to denote the range of possible numbers (0 through 9), along with the start/end token (represented with the same encoding), which is observed when a pointer reaches beyond the bounds of the input. At any given time, the NPI has access to the values referred to by two pointers, represented by Q(1, i1) and Q(1, i2). The pointers at index i1 and i2 are used to compare the pair of numbers considered during the bubble sweep, swapping them if the number at i1 is greater than that at i2. These pointers are referred to as bubble pointers. The pointer at index i3 represents a counter internal to the environment that is incremented once after each pass of the algorithm (one cycle of BUBBLE and RESET); when incremented a number of times equal to the length of the array, the flag i3 == length becomes true and terminates the entire algorithm.

The non-recursive trace loops on cycles of BUBBLE and RESET, which logically represent one bubble sweep through the array and a reset of the two bubble pointers to the very beginning of the array, respectively. In this version, there is a dependence on length: BSTEP and LSHIFT are called a number of times equal to one less than the length of the input array, in BUBBLE and RESET respectively.

Inside BUBBLE and RESET, there are two operations that can be made recursive. BSTEP, used in BUBBLE, compares pairs of numbers, continuously moving the bubble pointers once to the right each time until reaching the end of the array. LSHIFT, used in RESET, shifts the pointers left until reaching the start token.

We experiment with two levels of recursion: partial and full. Partial recursion only adds a tail recursive call to BUBBLESORT after BUBBLE and RESET, similar to the tail recursive call described previously for addition. The partial recursion is not enough for perfect generalization, as will be presented later in Section 4. Full recursion, in addition to making the aforementioned tail recursive call, adds two additional recursive calls; BSTEP and LSHIFT are made tail recursive. Figure 2 shows examples of traces for the different versions of bubble sort. Training on the fully recursive trace leads to perfect generalization, as shown in Section 4. We performed experiments on the partially recursive version in order to examine what happens when only one recursive call is
implemented, when in reality three are required for perfect generalization..\nAlgorithm 2 Depth First Search Topological Sort 1: Color all vertices white 2: Initialize an empty stack S and a directed acyclic graph DAG to traverse. 3: Begin traversing from Vertex 1 in the DAG. 4: function TOPOSORT(DAG) 5: while there is still a white vertex u: do 6: color[u] = grey 7: Vactive = U 8: do 9: if vactive has a white child v then 10: color[v] = grey 11: push Vactive onto S 12: Vactive = V 13: else 14: color[Vactive] = black 15: Write Vactive to result 16: if S is empty then pass 17: else pop the top vertex off S and set it to vactive 18: while S is not empty\nTopological Sort. We choose to implement a topological sort task for graphs. A topological sor is a linear ordering of vertices such that for every directed edge (u, v) from u to v, u comes befor v in the ordering. This is possible if and only if the graph has no directed cycles; that is to say, i must be a directed acyclic graph (DAG). In our experiments, we only present DAG's as inputs anc represent the vertices as values ranging from 1, . ., n , where the DAG contains n vertices.\nDirected acyclic graphs are structurally more diverse than inputs in the two tasks of grade-schoo. addition and bubble sort. The degree for any vertex in the DAG is variable. Also the DAG can have. potentially more than one connected component, meaning it is necessary to transition between these components appropriately.\nAlgorithm 2[shows the topological sort task of interest. This algorithm is a variant of depth firs search. We created a program set that reflects the semantics of Algorithm[2] For brevity, we refe the reader to the appendix for further details on the program set and non-recursive and recursive trace-generating functions used for topological sort..\nFor topological sort, the domain-specific encoder is\nThe DAG is represented as an adjacency list where DAG[il[j] refers to the j-th child of vertex i There are 3 pointers (Presult, Pstack, Pstart), Presult points to the next empty location in Qresult:.\nwhere Qcolor E RU4 is a scratch-pad that contains U rows, each containing one of four colors. (white, gray, black, invalid) with one-hot encoding. U varies with the number of vertices in the graph. We further have Qresult E NU, a scratch-pad which contains the sorted list of vertices at. the end of execution, and Qstack E NU, which serves the role of the stack S in Algorithm2 The contents of Qresult and Qstack are not exposed directly through the domain-specific encoder; rather,. we define primitive functions which manipulate these scratch-pads..\nPstack points to the top of the stack in Qstack, and pstart points to the candidate starting node for a connected component. There are 2 variables (Vactive and Vsave); Vactive holds the active vertex (as in Algorithm 2) and vsave holds the value of vactive before executing Line 12 of Algorithm2 childList E NU is a vector of pointers, where childList[i] points to the next child under consider- ation for vertex i.\nAn alternative way of representing the environment slice is to expose the values of the absolute vertices to the model; however, this makes it difficult to scale the model to larger graphs, since large vertex values are not seen during training time.\nWe refer the reader to the appendix for the non-recursive trace generating functions. 
Ir the non-recursive trace, there are four functions that can be made recursive-TOPOSORT CHECK_CHILD, EXPLORE, and NEXT_START, and we add a tail recursive call to each oi these functions in order to make the recursive trace. In particular, in the EXPLORE function adding a tail recursive call resets and stores the hidden states associated with vertices in a stack-like fashion. This makes it so that we only need to consider the vertices in the subgraph that are cur rently relevant for computing the sort, allowing simpler reasoning about behavior for large graphs The sequence of primitive operations (MOVE and WRITE operations) for the non-recursive and recursive versions are exactly the same.\nQuicksort. We implement a quicksort task, in order to demonstrate that recursion helps with learn ing divide-and-conquer algorithms. We use the Lomuto partition scheme; the logic for the recursive trace is shown in Algorithm[3] For brevity, we refer the reader to the appendix for information abou the program set and non-recursive and recursive trace-generating functions for quicksort. The logi for the non-recursive trace is shown in Algorithm4lin the appendix.\nAlgorithm 3 Recursive Quicksort\nAIgorithm 3 Recursive Quicksort. 1: Initialize an array A to sort.. 2: Initialize lo and hi to be 1 and n, where n is the length of A. 3: 4: function Qu1CKSORT(A, lo, hi). 5: if lo < hi: then 6: p = PARTITION(A, lo, hi) 7: QUICKSORT(A, lo, p - 1) 8: QUICKSORT(A,p + 1, hi) 9: 10: function PARTITION(A, lo, hi) 11: pivot = lo 12: for j E [lo, hi - 1] : do 13: if A[j] < A[hi] then 14: swap A[pivot] with A[j] 15: pivot = pivot + 1 16: swap A[pivot]with A[hi 17: return pivot\nwhere Qarray E RU 11 is a scratch-pad that contains U rows, each containing one of 11 values (one of the numbers O through 9 or an invalid state). Our implementation uses two stacks QstackLo and\nThe three environment observations aid with control flow in Algorithm2 Qcolor(Pstart) contains the color of the current start vertex, used in the evaluation of the condition in the whiLE loop in Line 5 of Algorithm2] Qcolor(DAG[Vactive][childList[Vactivel]) refers to the color of the next child of Vactive, used in the evaluation of the condition in the IF branch in Line 9 of Algorithm[2] Finally, the boolean pstack == 1 is used to check whether the stack is empty in Line 18 of Algorithm2\nThere are 6 pointers (Plo,Phi,PstackLo,PstackHi,Ppivot,Pj). Plo and Phi point to the lo and hi indices of the array, as in Algorithm 3] PstackLo and pstackHi point to the top (empty) positions in QstackLo and QstackHi. Ppivot and pj point to the pivot and j indices of the array, used in the PARTITiON function in Algorithm 3] The 4 environment observations aid with control flow; QstackLo(PstackLo - 1) < QstackHi(PstackHi - 1) implements the lo < hi comparison in Line 5 of Algorithm[3] PstackLo == 1 checks if the stacks are empty in Line 18 of Algorithm4] and the other observations (all involving Ppivot Or p) deal with logic in the PARTiTiON function.\nNote that the recursion for quicksort is not purely tail recursive and therefore represents a more complex kind of recursion that is harder to learn than in the previous tasks. Also, compared to the bubble pointers in bubble sort, the pointers that perform the comparison for quicksort (the COMP- SWAP function) are usually not adjacent to each other, making quicksort less local than bubble sort. 
In order to compensate for this, Ppivot and Pj require special functions (MOVE_PIVOT_LO and MOVE_J_LO) to properly set them to lo in Lines 11 and 12 of the PARTITION function in Algorithm 3."}, {"section_index": "5", "section_name": "3.3 PROVABLY PERFECT GENERALIZATION", "section_text": "We show that if we incorporate recursion, the learned NPI programs can achieve provably perfect generalization for different tasks. Provably perfect generalization implies the model will behave correctly, given any valid input. In order to claim a proof, we must verify that the model produces correct behavior over all base cases and reductions, as described in Section 2.

We propose and describe our verification procedure. This procedure verifies that all base cases and reductions are handled properly by the model via explicit tests. Note that recursion helps make this process tractable, because we only need to test a finite number of inputs to show that the model will work correctly on inputs of unbounded complexity. This verification phase only needs to be performed once after training. Formally, we verify that

∀i ∈ V, M(i) → P(i)

where i denotes a sequence of step inputs (within one function call), V denotes the set of valid sequences of step inputs, M denotes the neural network model, P denotes the correct program, and P(i) denotes the next step output from the correct program. The arrow in the theorem refers to evaluation, as in big-step semantics. The theorem states that for the same sequence of step inputs, the model produces the exact same step output as the target program it aims to learn. M, as described in Algorithm 1, processes the sequence of step inputs by using an LSTM.

Recursion drastically reduces the number of configurations we need to consider during the verification phase and makes the proof tractable, because it introduces structure that eliminates infinitely long sequences of step inputs that would otherwise need to be considered. For instance, for recursive addition, consider the family F of addition problems an an-1 ... a1 a0 + bn bn-1 ... b1 b0 where no CARRY operations occur. We prove every member of F is added properly, given that subproblems are added properly.

Without using a recursive program, such a proof is not possible, because the non-recursive program runs on an arbitrarily long addition problem that creates correspondingly long sequences of step inputs; in the non-recursive formulation of addition, ADD calls ADD1 a number of times that is dependent on the length of the input. The core LSTM module's hidden state is preserved over all these ADD1 calls, and it is difficult to interpret with certainty what happens over longer timesteps without concretely evaluating the LSTM with an input of that length. In contrast, each call to the recursive ADD always runs for a fixed number of steps, even on arbitrarily long problems in F, so we can test that it performs correctly on a small, fixed number of step input sequences. This guarantees that the step input sequences considered during verification contain all step input sequences which arise during execution of an unseen problem in F, leading to generalization to any problem in F. Hence, if all subproblems are added correctly, we have proven that any member of F will be added correctly, thus eliminating an infinite family of inputs that need to be tested.

To perform the verification as described here, it is critical to construct V correctly.
If it is too small, then execution of the program on some input might require evaluation of M(i) on some i ∉ V, and so the behavior of M(i) might deviate from P(i). If it is too large, then the semantics of P might not be well-defined on some elements in V, or the spurious step input sequences may not be reachable from any valid problem input (e.g., an array for quicksort or a DAG for topological sort).

To construct this set, by using the reference implementation of each subprogram, we construct a mapping between two sets of environment observations: the first set consists of all observations that can occur at the beginning of a particular subprogram's invocation, and the second set contains the observations at the end of that subprogram. We can obtain this mapping by first considering the possible observations that can arise at the beginning of the entry function (ADD, BUBBLESORT, TOPOSORT, and QUICKSORT) for some valid program input, and iteratively applying the observation-to-observation mapping implied by the reference implementation's step output at that point in the execution. If the step output specifies a primitive function call, we need to reason about how it can affect the environment so as to change the observation in the next step input. For non-primitive subprograms, we can update the observation-to-observation mapping currently associated with the subprogram and then apply that mapping to the current set. By iterating with this procedure, and then running P on the input observation set that we obtain for the entry point function, we can obtain V precisely. To make an analogy to MDPs, this procedure is analogous to how value iteration obtains the correct value for each state starting from any initialization.

An alternative method is to run P on many different program inputs and then observe the step input sequences which occur, to create V. However, to be sure that the generated V is complete (covers all the cases needed), we need to check all pairs of observations seen in adjacent step inputs (in particular, those before and after a primitive function call), in a similar way as if we were constructing V from scratch. Given a precise definition of P, it may be possible to automate the generation of V from P in future work.

Note that V should also contain the necessary reductions, which corresponds to making the recursive calls at the correct time, as indicated by P.

After finding V, we construct a set of problem inputs which, when executed on P, create exactly the step input sequences which make up V. We call this set of inputs the verification set, S_V.

Given a verification set, we can then run the model on the verification set to check if the produced traces and results are correct. If yes, then this indicates that the learned neural program achieves provably perfect generalization.
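In code, this final check is a straightforward loop; the sketch below is hypothetical (run_trace, model.step, and the reference program are placeholders, not the paper's Keras implementation).

```python
def verify(model, reference_program, verification_set):
    """Return True iff the model reproduces the reference program's step
    output on every step input induced by the verification set; together
    with coverage of all base cases and reduction rules, a pass yields
    the provable-generalization guarantee."""
    for problem in verification_set:
        # run_trace (placeholder) executes the reference program and
        # yields its (step input, step output) pairs for this problem
        for step_input, expected in run_trace(reference_program, problem):
            if model.step(step_input) != expected:
                return False  # some base case or reduction rule fails
    return True
```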
We note that for tasks with very large input domains, such as ones involving MNIST digits or speech samples, the state space of base cases and reduction rules could be prohibitively large, possibly infinite. Consequently, it is infeasible to construct a verification set that covers all cases, and the verification procedure we have described is inadequate. We leave devising a verification procedure more appropriate to this setting as future work.

As there is no public implementation of NPI, we implemented a version of it in Keras that is as faithful to the paper as possible. Our experiments use a small number of training examples.

Training Setup. The training set for addition contains 200 traces. The maximum problem length in this training set is 3 (e.g., the trace corresponding to the problem "109 + 101").

The training set for bubble sort contains 100 traces, with a maximum problem length of 2 (e.g., the trace corresponding to the array [3, 2]).

The training set for topological sort contains 6 traces, with one synthesized from a graph of size 5 and the rest synthesized from graphs of size 7.

The training set for quicksort contains 4 traces, synthesized from arrays of length 5.

The same set of problems was used to generate the training traces for all formulations of each task, for the non-recursive and recursive versions.

We now report on generalization for the varying tasks.

Grade-School Addition. Both the non-recursive and recursive learned programs generalize on all input lengths we tried, up to 5,000 digits. This agrees with the generalization of non-recursive addition in Reed & de Freitas (2016), where they reported generalization up to 3,000 digits. However, note that there is no provable guarantee that the non-recursive learned program will generalize to all inputs, whereas we show later that the recursive learned program has a provable guarantee of perfect generalization.

In order to demonstrate that recursion can help learn and generalize better, for addition, we trained only on traces for 5 arbitrarily chosen 1-digit addition examples. The recursive version can generalize perfectly to long problems constructed from these components (such as the sum "822+233", where "8+2" and "2+3" are in the training set), but the non-recursive version fails to sum these long problems properly.

Bubble Sort. Table 1 presents results on randomly generated arrays of varying length for the learned non-recursive, partially recursive, and fully recursive programs. For each length, we test each program on 30 randomly generated problems. Observe that the partially recursive version does slightly better than the non-recursive one for the setting in which the length of the array is 3, and that the fully recursive version is able to sort every array given to it. The non-recursive and partially recursive versions are unable to sort long arrays, beyond length 8.

Topological Sort. Both the non-recursive and recursive learned programs generalize on all graphs we tried, up to 120 vertices. As before, the non-recursive learned program lacks a provable guarantee of generalization, whereas we show later that the recursive learned program has one.

In order to demonstrate that recursion can help learn and generalize better, we trained a non-recursive and a recursive model on just a single execution trace generated from a graph containing 5 nodes (see footnote 3) for the topological sort task. For these models, Table 2 presents results on randomly generated DAGs of varying graph sizes (varying in the number of vertices). For each graph size, we test the learned programs on 30 randomly generated DAGs. The recursive version of topological sort solves all graph instances we tried, from graphs of size 5 through 70. On the other hand, the non-recursive version has low accuracy, beginning from size 5, and fails completely for graphs of size 8 and beyond.

Quicksort. Table 3 presents results on randomly generated arrays of varying length for the learned non-recursive and recursive programs. For each length, we test each program on 30 randomly generated problems.
Observe that the non-recursive program's correctness degrades for length 11 and beyond, while the recursive program can sort any given array.

3 The corresponding edge list is [(1, 2), (1, 5), (2, 4), (2, 5), (3, 5)].

Table 1: Accuracy on Randomly Generated Problems for Bubble Sort

Length of Array    Non-Recursive    Partially Recursive    Full Recursive
2                  100%             100%                   100%
3                  6.7%             23%                    100%
4                  10%              10%                    100%
8                  0%               0%                     100%
20                 0%               0%                     100%
90                 0%               0%                     100%

Table 2: Accuracy on Randomly Generated Problems for Topological Sort

Number of Vertices    Non-Recursive    Recursive
5                     6.7%             100%
6                     6.7%             100%
7                     3.3%             100%
8                     0%               100%
70                    0%               100%

Table 3: Accuracy on Randomly Generated Problems for Quicksort

Length of Array    Non-Recursive    Recursive
3                  100%             100%
5                  100%             100%
7                  100%             100%
11                 73.3%            100%
15                 60%              100%
20                 30%              100%
22                 20%              100%
25                 3.33%            100%
30                 3.33%            100%
70                 0%               100%

As mentioned in Section 2.1, we hypothesize the non-recursive programs do not generalize well because they have learned spurious dependencies specific to the training set, such as length of the input problems. On the other hand, the recursive programs have learned the true program semantics."}, {"section_index": "6", "section_name": "4.2 VERIFICATION OF PROVABLY PERFECT GENERALIZATION", "section_text": "We describe how models trained with recursive traces can be proven to generalize, by using the verification procedure described in Section 3.3. As described in the verification procedure, it is possible to prove our learned recursive program generalizes perfectly by testing on an appropriate set of problem inputs, i.e., the verification set. Recall that this verification procedure cannot be performed for the non-recursive versions, since the propagation of the hidden state in the core LSTM module makes reasoning difficult and so we would need to check an unbounded number of examples.

We describe the base cases, reduction rules, and the verification set for each task in Appendix A.6. For each task, given the verification set, we check the traces and results of the learned, to-be-verified neural program (described in Section 4.1, and for bubble sort, Appendix A.6) on the verification set, and ensure they match the traces produced by the true program P. Our results show that all learned, to-be-verified neural programs produced the same traces as those produced by P on the verification set. Thus, we demonstrate that recursion enables provably perfect generalization for different tasks, including addition, topological sort, quicksort, and a variant of bubble sort.

Note that the training set can often be considerably smaller than the verification set, and despite this, the learned model can still pass the entire verification set. Our result shows that the training procedure and the NPI architecture are capable of generalizing from the step input-output pairs seen in the training data to the unseen ones present in the verification set.

We emphasize that the notion of a neural recursive program has not been presented in the literature before: this is our main contribution. Recursion enables provably perfect generalization. To the best of our knowledge, this is the first time verification has been applied to a neural program, providing provable guarantees about its behavior. We instantiated recursion for the Neural Programmer-Interpreter by changing the training traces. In future work, we seek to enable more tasks with recursive structure.
We also hope to decrease supervision, for example by training with only partia or non-recursive traces, and to develop novel Neural Programming Architectures integrated directly with a notion of recursion."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "This material is in part based upon work supported by the National Science Foundation under Grant No. TWC-1409915. DARPA under Grant No. FA8750-15-2-0104, and Berkeley Deep Drive. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of National Science Foundation and DARPA"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Marcin Andrychowicz and Karol Kurach. Learning efficient algorithms with hierarchical attentiye memory. CoRR, abs/1602.03218,2016. URLhttp://arxiv.org/abs/1602.03218\nArvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent, 2015.\nScott Reed and Nando de Freitas. Neural programmer-interpreters. ICLR, 2016\nOriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In Advances in Neural In formation Processing Systems 28: Annual Conference on Neural Information Processing Sys- tems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pp. 2692-2700, 2015.URL http://papers.nips.cc/paper/5866-pointer-networks\nAlex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska Barwiska, Sergio Gmez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adri Puigdomnech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain. Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hass- abis. Hybrid computing using a neural network with dynamic external memory. Nature, 538 (7626):471-476, October 2016. ISSN 0028-0836, 1476-4687. doi: 10.1038/nature20101. URL ht.t.p :/ /wWWnat1lre om/doi finder/10 1038/nature201C\nKarol Kurach. Marcin Andrychowicz, and Ilya Sutskever. Neural random access machines. ERCIM News, 2016(107),2016. URL http://ercim-news.ercim.eu/en107/special/ neural-random-access-machines\nWojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 421-429, 2016. URLhttp: / / jm1r. org/proceedings/papers/v48/zaremba16.html\nA.1 PROGRAM SET FOR NON-RECURSIVE TOPOLOGICAL SORT\nProgram Descriptions Calls Arguments TOPOSORT Perform topological TRAVERSE NONE sort on graph NEXT_START, WRITE, MOVE TRAVERSE Traverse graph until CHECK_CHILD, EX- NONE stack is empty PLORE CHECK_CHILD Check if a white MOVE NONE child exists; if so, set childList[Vactive] to point to it EXPLORE Repeatedly traverse STACK, NONE subgraphs until stack CHECK_CHILD, is empty WRITE, MOVE STACK Interact with stack. WRITE, MOVE PUSH, POP either pushing popping NEXT_START Move Pstart until MOVE NONE reaching a white vertex. If a white vertex is found, set Pstart to point to it; this signifies the start of a traversal of a new connected com- ponent. 
If no white vertex is found, the entire execution is terminated WRITE Write a value either NONE Described below to environment (e.g., to color a vertex) or variable (e.g., to change the value of Vactive) MOVE Move a pointer NONE Described below (e.g., Pstart or childList[Vactive]) up or down\nTraverse graph until stack is empty if uhit\nchild exists: if so, set childList[Vactive] to point to it\nInteract with stack, either pushing or popping\nto color a vertex) or variable (e.g., to change the value of Vactive Move 2 nointer\n(e.g., Pstart or childList[Vactive] up or down"}, {"section_index": "9", "section_name": "Argument Sets for WRITE and MOVE", "section_text": "WRITE. The WRITE operation has the following arguments:\nCOLOR_CURR colors Vactive,( COLOR_NEXT colors Vertex DAG[Vactive][childList[Vactive]] ACTIVE_START writes Pstart to Vactive, ACTIVE_NEIGHB writes DAG[Vactive][childList[Vactive]]toVactive, ACTIVE_STACK writes Qstack(Pstack)to Vactive, SAVE writes Vactive to Vsave, STACK_PUSH pushes Vactive to the top of the stack, STACK_POP writes a null value to the top of the stack, and RESU LT writes Vactive to result(Presult)\nCOLOR_GREY and COLOR_BLACK color the given vertex grey and black, respectively\nDescribed below\nRG_1 (Pointer): Presult, Pstack, Pstart, childList[Vactive], childList[Vsave\nNote that the argument is the identity of the pointer, not what the pointer points to; in other words ARG_1 can only take one of 5 values..\nARG_2 (Increment or Decrement): UP. DOWN\n// Top level topological sort call TOPOSORT() { while (Qcolor(Pstart) is a valid color): // color invalid when all vertices explored WRITE (ACTIVE_START) WRITE (COLOR_CURR, COLOR_GREY) TRAVERSE () MOVE(Pstart, UP) NEXT_START() TRAVERSE() { CHECK_CHILD () EXPLORE() CHECK_CHILD (){ while (Qcolor(DAG[vactive[childList[vactivell) is not white and is not invalid): // colo MOVE(childList[Vactive], UP) EXPLORE() { do if (Qcolor(DAG[vactive][childList[Vactivel]) is white) : WRITE (COLOR_NEXT, COLOR_GREY) STACK(PUSH) WRITE (SAVE) WRITE (ACTIVE_NEIGHB) MOVE(childList[Vsave],UP) else: WRITE (COLOR_CURR, COLOR_BLACK) WRITE (RESULT) MOVE(Presult, UP) if(Pstack ) -1) : break else: STACK(POP) CHECK_CHILD () while (true). STACK(op) { if (op =- PUSH) : WRITE (STACK_PUSH) MOVE(Pstack, UP) if (op =- POP): WRITE (ACTIVE_STACK) WRITE (STACK_POP) MOVE(Pstack, DOWN) NEXT_START() { while(Qcolor(Pstart) is not white and is not invalid): // color invalid when all vertice MOVE(Pstart, UP)\nAlgorithm 4 Iterative Ouicksort\nAlgorithm 4 1terauve Quicksort. 1: Initialize an array A to sort and two empty stacks Slo and Shi 2: Initialize lo and hi to be 1 and n, where n is the length of A. 3: 4: function PARTITION(A, lo, hi) 5: pivot = lo 6: for j E [lo, hi - 1] : do 7: if A[j] < A[hi] then 8: swap A[pivot] with A[j] 9: pivot = pivot + 1 10: swap A[pivot] with A[hi] 11: return pivot 12: 13: function QuiCKSORT(A, lo, hi). 14: while Sto and Shi are not empty: do. 15: Pop states off Sto and Sni, writing them to lo and hi.. 16: p = PARTITION(A, lo, hi) 17: Push p + 1 and hi to Sto and Shi.. 18: Push lo and p - 1 to Sto and Shi..\nA.4 PROGRAM SET FOR OUICKSORT\nProgram Descriptions Calls Arguments QUICKSORT Run the quicksort. Non-Recursive: PAR- Implicitly: array routine in place for. TITION, STACK, A to sort, lo, hi the array A, for. WRITE indices from lo to hi. Recursive: same as non-recursive version,. along with QUICK- SORT PARTITION Runs the partition COMPSWAP_LOOP NONE function. 
At end, MOVE_PIVOT_LO, pointer Ppivot is MOVE_J_LO, SWAP moved to the pivot. COMPSWAP LOOP Runs the FOR loop. COMPSWAP, MOVE NONE inside the partition. function COMPSWAP Compares SWAP, MOVE NONE A[pivot] A[j]; if so, perform a swap. and increment ppivot. SET_PIVOT_LO Sets Ppivot to lo in- NONE NONE dex SET_J_LO Sets p; to lo index. NONE NONE SET_J_NULL Sets p; to 00 NONE NONE STACK Pushes lo/hi states WRITE. MOVE Described below. onto stacks Sto and. Shi according to argument (described) below) MOVE Moves pointer one. NONE Described below. unit up or down SWAP Swaps elements at NONE Described below. given array indices. WRITE Write a value NONE Described below. either to stack (e.g., Q stack Lo or QstackHi) or to pointer (e.g., to change the value of. Phi)\nCOMPSWAP LOOP\nonto stacks Sio and Shi according to argument (described below)\npointer (e.g., to change the value of Phi)\nSTACK. The STACK operation has the following arguments:\nARG_1 (Operation): STACK_PUSH_CALL1, STACK_PUSH_CALL2, STACK_POP\nMOVE. The MOVE operation has the following arguments:\nARG_1 (Pointer): PstackLo, PstackHi, Pj, Ppivot\nNote that the argument is the identity of the pointer, not what the pointer points to; in other words ARG_1 can only take one of 4 values..\nSTACK_PUSH_CALL1 pushes lo and pivot-1 to QstackLo and QstackHi. STACK_PUSH_CALL2 pushes pivot + 1 and hi to QstackLo and QstackHi. STACK POP pushes -00 values to QstackLo and Q stack Hi.\nSWAP. The SWAP operation has the following arguments:\nWRITE. The WRITE operation has the following arguments:\nENV_STACK_LO and ENV_STACK_HI represent QstackLo(PstackLo) and QstackHi(PstackHi), re spectively.\nNote that the argument is the identity of the pointer, not what the pointer points to; in other words. ARG_1 can only take one of 4 values, and ARG_2 can only take one of 7 values.\nA.5.1 NON-RECURSIVE TRACE-GENERATING FUNCTIONS\n1 Initialize Plo to 1 and Phi to n (length o 2 Initialize pj to -oo 3 4 QUICKSORT() { 5 while (PstackLo/ 1): 6 if (QstackLo(PstackLo-1)< QstackHi 7 STACK(STACK_POP) 8 else: 9 WRITE(Phi, ENV_STACK_HI_PEEK) 10 WRITE(Plo, ENV_STACK_LO_PEEK) 11 STACK(STACK_POP) 12 PARTITION() 13 STACK (STACK_PUSH_CALL2) 14 STACK(STACK_PUSH_CALL1) 15 } 16 17 PARTITION() { 18 SET_PIVOT_LO() 19 SET_J_LO() 20 COMPSWAP_LOOP () 21 SWAP(Ppivot,Phi) 22 SET_J_NULL() 23 } 24 25 COMPSWAP_LOOP () { 26 while (pj /Phi): 27 COMPSWAP () 28 MOVE(Pj, UP) 29 } 30 31 COMPSWAP() { 32 if (A[pj]< A[phi]) : 33 SWAP(Ppivot;Pj) 34 MOVE(Ppivot, UP) 35 } 36 37 STACK(op) { 38 if (op == STACK_PUSH_CALL1) : 39 WRITE(ENV_STACK_LO, Plo) 40 WRITE(ENV_STACK_HI, Ppivot -1) 41 MOVE(PstackLo, UP) 42 MOVE(PstackHi, UP) 43 44 if (op == STACK_PUSH_CALL2): 45 WRITE(ENV_STACK_LO, Ppivot+1) 46 WRITE(ENV_STACK_HI, Phi) 47 MOVE(PstackLo, UP) 48 MOVE(PstackHi, UP) 49 50 if (op == STACK_POP): 51 WRITE (ENV_STACK_LO, RESET) 52 WRITE (ENV_STACK_HI, RESET) 53 MOVE(PstackLo, DOWN) 54 MOVE(PstackHi, DOWN) 55 }\nn this section, we describe the space of base cases and reduction rules that must be covered for eac of the four sample tasks, in order to create the verification set.\nBase Cases and Reduction Rules for Addition. For the recursive formulation of addition, we analytically construct the set of input problems that cover all base cases and reduction rules. We. outline how to construct this set..\nIt is sufficient to construct problems where every transition between two adjacent columns is cov ered. 
The ADD reduction rule ensures that each call to ADD only covers two adjacent columns and so the LSTM only ever runs for a fixed number of steps necessary to process these two columns\nWe construct input problems by splitting into two cases: one case in which the left column contains. a null value and another in which the left column does not contain any null values. We then construct. problem configurations that span all possible valid environment states (for instance, in order to force the carry bit in a column to be 1, one can add the sum \"1+9' in the column to the right)..\nThe operations we need to be concerned most about are CARRY and LSHIFT, which induce partial environment states spanning two columns. It is straightforward to deal with all other operations. which do not induce partial environment states..\nUnder the assumption that there are no leading O's (except in the case of single digits) and the two numbers to be added have the same number of digits, the verification set for addition contains 20,181 input problems. The assumption of leading O's can be easily removed, at the cost of slightly increasing the size of the verification set. We made the assumption of equivalent lengths in order to parametrize the input format with respect to length, but this assumption can be removed as well.\nBase Cases and Reduction Rules for Bubble Sort. The original version of the bubblesort im-. plementation exposes the values within the array. While this matches the description from Reed & de Freitas(2016), we found that this causes an unnecessary blowup in the size of V and makes it. much more difficult to construct the verification set. For purposes of verification, we replace the domain-specific encoder with the following:\nFor addition, we analytically determine the verification set. For tasks other than addition, it is diffi cult to analytically determine the verification set, so instead, we randomly generate input candidates until they completely cover the base cases and reduction rules.\nTable 4: Accuracy on Randomly Generated Problems for Variant of Bubble Sor\nwhich directly exposes which of the two values pointed to is larger. This modification also enables us to sort arrays containing arbitrary comparable elements..\nWe also report on generalization results for the non-recursive and recursive versions of this variant of bubble sort. Table 4 demonstrates that the accuracy of the non-recursive program degrades sharply when moving from arrays of length 7 to arrays of length 8. This is due to the properties of the training set -- we trained on 2 traces synthesized from arrays of length 7 and 1 trace synthesized from an array of length 6. Table 4 also demonstrates that the (verified) recursive program generalizes perfectly.\nBase Cases and Reduction Rules for Topological Sort. For each function we use to implement. the recursive version of topological sort, we need to consider the set of possible environment obser- vation sequences we can create from all valid inputs and test that the learned program produces the correct behavior on each of these inputs. We have three observations: the color of the start node, the color of the active node's next child to be considered, and whether the stack is empty. Naively, we might expect to synthesize and test an input for any sequence created by combining the four possible colors in two variables and another boolean variable for whether the stack is empty (so 32 possible. 
observations at any point), but for various reasons, most of these combinations are impossible to. occur at any given point in the execution trace..\nThrough careful reasoning about the possible set of environment observations created by all valid inputs, and how each of the operations in the execution trace affects the environment, we can con- struct V using the procedure described in Section|3.3 We then construct a verification set of size 73 by ensuring that randomly generated graphs cover the analytically derived V. The model described in the training setup of Section 4(trained on 6 traces) was verified to be correct via the matching procedure described in Section4.2\nBase Cases and Reduction Rules for Quicksort. As with the others, we apply the procedure. described in Section|3.3 to construct V and then empirically create a verification set which covers. V. The verification set can be very small, as we found a 10-element array ([8,2,1,2,0,8,5,8,3,7]) is. sufficient to cover all of V. We note that an earlier version of quicksort we tried lacked primitive operations to directly move a pointer to another, and therefore needed more functions and observa tions. As this complexity interfered with determining the base cases and reductions, we changed the algorithm to its current form. Even though the earlier version also generalized just as well in prac tice, relatively small differences in the formulation of the traces and the environment observations can drastically change the difficulty of verification..\nLength of Array Non-Recursive Recursive\nength of Array Non-Recursive Recursive 2 100% 100% 3 100% 100% 4 100% 100% 5 100% 100% 6 90% 100% 7 86.7% 100% 8 6.7% 100% 9 0% 100% 10 0% 100% 12 0% 100% 15 0% 100% 70 0% 100%\nBy reasoning about the possible set of environment observations created by all valid inputs, we construct V using the procedure described in Section 3.3] Using this modification, we constructed a verification set consisting of one array of size 10."}] |
ryQbbFile | [{"section_index": "0", "section_name": "CAN AI GENERATE LOVE ADVICE?: TOWARD NEURAL ANSWER GENERATION FOR NON-FACTOID OUESTIONS", "section_text": "Makoto Nakatsuji, Hisashi Ito, Naruhiro Ikeda, Shota Sagara & Akihisa Fujit NTT Resonant Inc.\n{nakatuji,h-ito, nikeda, s-sagara, akihisa}@nttr.co. jp"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Recently, dialog-based natural language understanding systems such as Apple's Siri, IBM's Watson. Amazon's Echo, and Wolfram Alpha have spread through the market. In those systems, Questior Answering (QA) modules are particularly important since people want to know many things ir. their daily lives. Technically, there are two types of questions in QA systems: factoid questions. and non-factoid ones. The former are asking, for instance, for the name of a person or a locatior such that \"What/Who is X?\". The latter are more diverse questions which cannot be answered by a. short fact. They range from advice on making long distance relationships work well, to requests fol opinions on some public issues. Significant progress has been made at answering factoid questions (Wang et al. (2007); Yu et al. (2014)), however, retrieving answers for non-factoid questions fron. the Web remains a critical challenge in improving QA modules. The QA community sites such as Yahoo! Answers and Quora can be sources of training data for the non-factoid questions where the goal is to automatically select the best of the stored candidate answers.."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Deep learning methods that extract answers for non-factoid questions from QA. sites are seen as critical since they can assist users in reaching their next decisions through conversations with AI systems. The current methods, however, have the. following two problems: (1) They can not understand the ambiguous use of words. in the questions as word usage can strongly depend on the context (e.g. the word. \"relationship' has quite different meanings in the categories of Love advice and. other categories). As a result, the accuracies of their answer selections are not. good enough. (2) The current methods can only select from among the answers. held by QA sites and can not generate new ones. Thus, they can not answer the. questions that are somewhat different with those stored in QA sites. Our solution,. Neural Answer Construction Model, tackles these problems as it: (1) Incorporates. the biases of semantics behind questions (e.g. categories assigned to questions). into word embeddings while also computing them regardless of the semantics. As. a result, it can extract answers that suit the contexts of words used in the question. as well as following the common usage of words across semantics. This improves. the accuracy of answer selection. (2) Uses biLSTM to compute the embeddings. of questions as well as those of the sentences often used to form answers (e.g.. sentences representing conclusions or those supplementing the conclusions). It. then simultaneously learns the optimum combination of those sentences as well as. the closeness between the question and those sentences. As a result, our model can. construct an answer that corresponds to the situation that underlies the question;. it fills the gap between answer selection and generation and is the first model. to move beyond the current simple answer selection model for non-factoid QAs.. Evaluations using datasets created for love advice stored in the Japanese QA site. 
Oshiete goo indicate that our model achieves 20% higher accuracy in answer creation than the strong baselines. Our model is practical and has already been applied to the love advice service in Oshiete goo.
Figure 1: Main ideas: (a) computing word embeddings biased with semantics, illustrated with a "Family" question ("My son is starting to get homework. Should I help him with it or let him figure it out on his own?") and a "Love advice" question ("Will distance relationship ruin love? Should I be close to lovers all the time?"); (b) a neural network that selects and combines sentences (question q, conclusion ac, supplement as, each encoded by a biLSTM and max pooling, with a loss function for the selection and combination of sentences) to construct answers.
Recent deep learning methods have been applied to this non-factoid answer selection task using datasets stored in the QA sites, resulting in state-of-the-art performance (Yu et al. (2014); Tan et al. (2015); Qiu & Huang (2015); Feng et al. (2015); Wang & Nyberg (2015); Tan et al. (2016)). They usually compute closeness between questions and answers by the individual embeddings obtained using a convolutional model. For example, Tan et al. (2016) builds the embeddings of questions and those of answers based on bidirectional long short-term memory (biLSTM) models, and measures their closeness by cosine similarity. It also utilizes an efficient attention mechanism to generate the answer representation following the question context. Their results show that their model can achieve much more accurate results than the strong baseline (Feng et al. (2015)). The current methods, however, have the following two problems when applying them to real applications:
(1) They can not understand the ambiguous use of words written in the questions, as words are used in quite different ways following the context in which they appear (e.g. the word "relationship" used in a question submitted to the "Love advice" category is quite different from the same word submitted to the "Business advice" category). This makes words important for a specific context likely to be disregarded in the following answer selection process. As a result, the answer selection accuracies become weak for real applications.
(2) They can only select from among the answers stored in the QA systems and can not generate new ones.
Thus, they can not answer the questions that are somewhat different from those stored in the QA systems, even though it is important to cope with such differences when answering non-factoid questions (e.g. questions in the "Love advice" category often differ with the situation and user even though they share the same topics). Furthermore, the answers selected from QA datasets often contain a large amount of unrelated information. Some other studies have tried to create short answers to the short questions often seen in chat systems (Vinyals & Le (2015); Serban et al. (2015)). Our target, non-factoid questions in QA systems, are, however, much longer and more complicated than those in chat systems. As described in their papers, the above methods, unfortunately, create unsatisfying answers to such non-factoid questions.
To solve the above problems, this paper proposes a neural answer construction model; it fills the gap between answer selection and generation and is the first model to move beyond the current simple answer selection model for non-factoid QAs. It extends the above-mentioned biLSTM model since it is language independent and free from feature engineering, linguistic tools, or external resources. Our model takes the following two ideas:
(1) Before learning answer creation, it incorporates semantic biases behind questions (e.g. titles or categories assigned to questions) into word vectors while computing vectors by using QA documents stored across semantics. This process emphasizes the words that are important for a certain context. As a result, it can select the answers that suit the contexts of words used in the questions as well as the common usage of words seen across semantics. This improves the accuracies of answer selections.
For example, in Fig. 1-(a), there are two questions in the categories "Family" and "Love advice". Words marked with rectangles are category specific (i.e. "son" and "homework" are specifically observed in "Family" while "distance", "relationship", and "lovers" are found in "Love advice"). Our method can emphasize those words. As a result, answers that include the topics "son" and "homework", or the topics "distance", "relationship", and "lovers", will be scored highly for the above questions in the following answer selection task.
(2) The QA module designer first defines the abstract scenario of the answer to be created: the types of sentences that should compose the answer and their occurrence order in the answer (e.g. typical answers in "Love advice" are composed in the order of the sentence types "sympathy", "conclusion", "supplement for conclusion", and "encouragement"). The sentence candidates can be extracted from whole answers by applying sentence extraction methods or sentence type classifiers (Schmidt et al. (2014); Zhang et al. (2008); Nishikawa et al. (2010); Chen et al. (2010)). It next simultaneously learns the closeness between questions and sentences that may include answers as well as combinational optimization of those sentences. Our method also uses an attention mechanism to generate sentence representations according to the prior sentence; this extracts important topics in the sentence and tracks those topics in subsequent sentences. As a result, it can construct answers that have natural sentence flow whose topics correspond to the questions. Fig. 1-(b) explains the proposed neural network by using examples. Here, the QA module designer first defines the abstract scenario for the answer as in the order of "conclusion" and "supplement". Thus, there are three types of inputs: "question", "conclusion", and "supplement". It next runs biLSTMs over those inputs separately; it learns the order of word vectors such that "relationship" often appears next to "distance". It then computes the embedding for the question, that for the conclusion, and that for the supplement by max pooling over the hidden vectors output by the biLSTMs. Finally, it computes the closeness between question and conclusion, that between question and supplement, and the combinational optimization between conclusion and supplement with the attention mechanism, simultaneously (dotted lines in Fig. 1-(b) represent attention from conclusion to supplement).
We evaluated our method using datasets stored in the Japanese QA site Oshiete goo.
In particular, our evaluations focus on questions stored in the "Love advice" category since they are representative non-factoid questions: the questions are often complicated and most questions are very long. The results show that our method outperforms the previous methods, including the method by (Tan et al. (2016)); our method accurately constructs answers by naturally combining key sentences that are highly close to the question."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Previous works on answer selection normally require feature engineering, linguistic tools, or external resources. Recent deep learning methods are attractive since they demonstrate superior performance compared to traditional machine learning methods without the above-mentioned tiresome procedures. For example, (Wang & Nyberg (2015); Hu et al. (2014)) construct a joint feature vector on both question and answer and then convert the task into a classification or ranking problem. (Feng et al. (2015); Yu et al. (2014); dos Santos et al. (2015); Qiu & Huang (2015)) learn the question and answer representations and then match them by certain similarity metrics. Recently, Tan et al. (2016) took the latter approach and achieved more accurate results than the current strong baselines (Feng et al. (2015); Bendersky et al. (2011)). They, however, can only select answers and not generate them. Other than the above, recent neural text generation methods (Serban et al. (2015); Vinyals & Le (2015)) can also intrinsically be used for answer generation. Their evaluations showed that they could generate very short answers for factoid questions, but not the longer and more complicated answers demanded by non-factoid questions. Our Neural Answer Construction Model fills the gap between answer selection and generation for non-factoid QAs. It simultaneously learns the closeness between questions and sentences that may include answers as well as combinational optimization of those sentences. Since the sentences themselves in the answer are short, they can be generated by neural conversation models like (Vinyals & Le (2015)).
As for word embeddings with semantics, some previous methods use the semantics behind words by using semantic lexicons such as WordNet and Freebase (Xu et al. (2014); Bollegala et al. (2016); Faruqui et al. (2015); Johansson & Nieto Pina (2015)). They, however, do not use the semantics behind the question/answer documents, e.g. document categories. Thus, they can not well catch the contexts in which the words appear in the QA documents. They also require external semantic resources other than QA datasets.
Here, we explain QA-LSTM (Tan et al. (2015)), the basic discriminative framework for answer selection based on LSTM, since we base our ideas on its framework.
We first explain the LSTM and introduce the terminologies used in this paper. Given input sequence X = {x(1), x(2), ..., x(N)}, where x(t) is the t-th word vector, the t-th hidden vector h(t) is updated as
i_t = σ(W_i x(t) + U_i h(t-1) + b_i)
f_t = σ(W_f x(t) + U_f h(t-1) + b_f)
o_t = σ(W_o x(t) + U_o h(t-1) + b_o)
C̃_t = tanh(W_c x(t) + U_c h(t-1) + b_c)
C_t = i_t * C̃_t + f_t * C_{t-1}
h(t) = o_t * tanh(C_t)
There are three gates (input i_t, forget f_t, and output o_t), and a cell memory vector C_t. σ is the sigmoid function. W ∈ R^{H×N}, U ∈ R^{H×H}, and b ∈ R^{H×1} are the network parameters to be learned. Single-direction LSTMs are weak in that they fail to make use of the contextual information from the future tokens. BiLSTMs use both the previous and future context by processing the sequence in two directions, and generate two sequences of output vectors. The output for each token is the concatenation of the two vectors from both directions, i.e. h(t) = h_forward(t) ∥ h_backward(t).
In the QA-LSTM framework, given input pair (q, a) where q is a question and a is a candidate answer, it first retrieves the word embeddings (WEs) of both q and a. Next, it separately applies a biLSTM over the two sequences of WEs. Then, it generates fixed-sized distributed vector representations o_q for q (or o_a for a) by computing max pooling over all the output vectors and then concatenating the resulting vectors on both directions of the biLSTM. Finally, it uses cosine similarity cos(o_q, o_a) to score the input (q, a) pair.
It then defines the training objective as the hinge loss
L = max{0, M − cos(o_q, o_a+) + cos(o_q, o_a−)}
where o_a+ is the output vector for a ground-truth answer, o_a− is that for an incorrect answer randomly chosen from the entire answer space, and M is a margin. It treats any question with more than one ground truth as multiple training examples. Finally, batch normalization is performed on the representations before computing cosine similarity (Ioffe & Szegedy (2015)).
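To make the QA-LSTM scoring and hinge objective concrete, here is a minimal PyTorch sketch (the paper's own implementation used Python with Chainer; the class and function names below are illustrative, not the authors' code):

```python
# Minimal sketch of the QA-LSTM scorer: shared biLSTM encoder, max pooling,
# cosine scoring, and the pairwise hinge loss described above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QALSTMScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # One shared biLSTM encodes both the question and the candidate answer.
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)

    def encode(self, token_ids):                       # (batch, seq_len)
        h, _ = self.bilstm(self.emb(token_ids))        # (batch, seq_len, 2*hidden)
        return h.max(dim=1).values                     # max pooling over time

    def forward(self, q_ids, a_ids):
        return F.cosine_similarity(self.encode(q_ids), self.encode(a_ids), dim=1)

def hinge_loss(model, q_ids, pos_ids, neg_ids, margin=0.2):
    # L = max(0, M - cos(o_q, o_a+) + cos(o_q, o_a-)), averaged over the batch.
    return torch.clamp(margin - model(q_ids, pos_ids) + model(q_ids, neg_ids),
                       min=0).mean()
```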
"}, {"section_index": "4", "section_name": "4.1 WORD EMBEDDINGS WITH DOCUMENT SEMANTICS", "section_text": "First, we explain the paragraph2vec model. It averages the paragraph vector with several word vectors from a paragraph and predicts the following word in the given context. It trains both word vectors and paragraph vectors by stochastic gradient descent and backpropagation (Rumelhart et al. (1988)). While paragraph vectors are unique among paragraphs, the word vectors are shared.
Next, we introduce our method that incorporates the semantics behind QA documents into word embeddings (WEs) in the training phase. This process is inspired by paragraph2vec (Le & Mikolov (2014)), an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. The idea is simple. Please see Fig. 2. It averages the vector of the category token and the vectors of title tokens, which are assigned to the QA documents, with several of the word vectors present in those documents. It then predicts the following word in the given context. Here, title tokens are defined as nouns extracted from the titles assigned to the question. Multiple title tokens can be extracted from a title, while one category token is assigned to a question. Those tokens are shared among datasets in the same category. It trains the category vector and title vectors as well as word vectors in QA documents as per the paragraph2vec model. Those additional vectors are used as semantic biases for learning WEs. They are useful in emphasizing the words following the contexts of particular categories or titles. This improves the accuracies of answer selection described later, as explained in the Introduction.
Figure 2: Learning word vectors biased with semantics (the category token and title tokens are averaged together with context word vectors such as "Will", "distance", "relationship", and "ruin" to predict the next word, "love").
For example, in Fig. 2, it can incorporate semantic biases from the category "Love advice" into the words (e.g. "Will", "distance", "relationship", "ruin", "love", and so on) in the question in "Love advice". Thus, it can well apply the biases from the category "Love advice" to the words (e.g. "distance" and "relationship") if they specifically appear in "Love advice". On the other hand, words that appear in several categories (e.g. "will") are biased with several categories and thus will not be emphasized.
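The biased-averaging step of Fig. 2 can be sketched as follows; this is an illustrative NumPy toy with hypothetical vocabulary sizes and a plain softmax output layer, not the authors' implementation:

```python
# Sketch of the biased context: category and title-token vectors are averaged
# with the context word vectors before predicting the next word
# (paragraph2vec-style). All sizes and names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
dim, vocab = 300, 5000
word_vecs = rng.normal(scale=0.01, size=(vocab, dim))     # shared across semantics
cat_vecs = {"Love advice": rng.normal(scale=0.01, size=dim)}
title_vecs = {"distance": rng.normal(scale=0.01, size=dim)}
out_weights = rng.normal(scale=0.01, size=(vocab, dim))   # softmax output layer

def context_vector(category, titles, context_word_ids):
    vecs = [cat_vecs[category]] + [title_vecs[t] for t in titles] \
         + [word_vecs[i] for i in context_word_ids]
    return np.mean(vecs, axis=0)                          # the "average" in Fig. 2

def next_word_logprobs(category, titles, context_word_ids):
    h = context_vector(category, titles, context_word_ids)
    scores = out_weights @ h
    scores -= scores.max()                                # numerical stability
    return scores - np.log(np.exp(scores).sum())

# Training would ascend the log-probability of the observed next word ("love"),
# updating word, title, and category vectors jointly by SGD/backpropagation.
```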
Here, we explain our model. We first explain our approach and then the algorithm.
Approach: It takes the following three approaches:
- Design the abstract scenario for the answer: The answer is constructed according to the order of the sentence types defined by the designer. For example, there are sentence types such as a sentence that states sympathy with the question, a sentence that states a conclusion to the question, a sentence that supplements the conclusion, and a sentence that states encouragement to the questioner. This is inspired by the automated web service composition framework (Rao & Su (2005)), where the requester should build an abstract process before the web service composition planning starts. In our setting, the process is the scenario of the answer and the service is the sentence in the scenario. Thus, our method can construct an answer by binding concrete sentences to fit the scenario (a small code sketch follows Algorithm 1 below). For example, the scenario for love advice can be designed as follows: it begins with a sympathy sentence (e.g. "You are struggling too."), next it states a conclusion sentence (e.g. "I think you should make a declaration of love to her as soon as possible."), then it supplements the conclusion by a supplemental sentence (e.g. "If you are too late, she may fall in love with someone else."), and finally it ends with an encouragement sentence (e.g. "Good Luck!").
- Joint neural network to learn sentence selection and combination: Our model computes the combinational optimization among sentences that may include the answer as well as the closeness between question and sentences within a single neural network. This improves answer sentence selection; our model can avoid the cases in which the combination of sentences is not good enough even though the scores of closeness between the question and each sentence are high. It also makes the parameter tuning simpler than a model that separates the network for sentence selection from that for sentence combination. The image of this neural network is depicted in Fig. 1-(b). Here, it learns the closeness between "Will distance relationship ruin love?" and "Distance cannot ruin true love", the closeness between "Will distance relationship ruin love?" and "Distance certainly tests your love.", and the combination between "Distance cannot ruin true love" and "Distance certainly tests your love."
- Attention mechanism to improve the combination of sentences: Our method extracts important topics in the conclusion sentence and emphasizes those topics in the supplemental sentence in the training phase; this is inspired by (Tan et al. (2016)), who utilize an attention mechanism to generate the answer representation following the question context. As a result, it can combine conclusions with the supplements following the contexts written in the conclusion sentences. This makes the story in the created answers very natural. In Fig. 1-(b), our attention mechanism extracts important topics (e.g. the topic that represents "distance") in the conclusion sentence "Distance cannot ruin true love" and emphasizes those topics in computing the representation of the supplement sentence "Distance certainly tests your love."
Algorithm 1 A neural answer construction model
Input: Pairs of question, conclusion, and supplement, {(q, ac, as)}
Output: Parameters set by the algorithm
1: for n = 1, n++, while n ≤ N do
2:   for each pair (q, ac, as) do
3:     Compute o_q^c and o_c by biLSTMs and max pooling
4:     Compute o_q^s by biLSTM and max pooling
5:     for each t-th hidden vector for supplement do
6:       Compute h̃_s(t) by Eq. (1)
7:     end for
8:     Compute o_s by max pooling
9:     Compute L by Eq. (2)
10:  end for
11: end for
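The first approach above can be pictured with a small sketch: a scenario is just an ordered list of sentence types, and the trained scorer binds one candidate sentence per type. Note that this greedy loop is a simplification; in the model described next, conclusion and supplement are actually scored jointly via Eq. (2). All names are hypothetical:

```python
# Toy sketch of binding concrete sentences to an abstract answer scenario.
# score() stands in for the learned closeness/combination model.
def construct_answer(question, candidates_by_type, score,
                     scenario=("sympathy", "conclusion",
                               "supplement", "encouragement")):
    answer, chosen = [], {}
    for sentence_type in scenario:
        best = max(candidates_by_type[sentence_type],
                   key=lambda s: score(question, s, chosen))
        chosen[sentence_type] = best
        answer.append(best)
    return " ".join(answer)
```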
Procedure: The core part of the answer is usually the conclusion sentence and its supplemental sentence. Thus, for simplicity, we here explain the procedure of our model in selecting and combining the above two types of sentences. As the reader can imagine, it can easily be applied to four sentence types. Actually, our love advice service by AI in Oshiete goo was implemented for four types of sentences: sympathy, conclusion, supplement, and encouragement (see the Evaluation section). The model is illustrated in Fig. 1-(b), in which the input is (q, ac, as), where q is the question, ac is a candidate conclusion sentence, and as is a candidate supplemental sentence. The word embeddings (WEs) for words in q, ac, and as are extracted in the way described in the previous subsection. The procedure of our model is as follows (please see Algorithm 1 also):
(1) It iterates the following procedures (2) to (7) N times (line 1 in the algorithm).
(2) It picks up each pair (q, ac, as) in the dataset (line 2 in the algorithm).
In the following steps (3) and (4), the same biLSTM is applied to both q and ac to compute the closeness between q and ac. Similarly, the same biLSTM is applied to both q and as. However, the biLSTM for computing closeness between q and ac differs from that between q and as, since ac and as have different characteristics.
(3) It separately applies a biLSTM over the two sequences of WEs, q and ac, and computes the max pooling over the t-th hidden vector for question h_q(t) and that for conclusion h_c(t). As a result, it acquires the question embedding o_q^c and the conclusion embedding o_c (line 3 in the algorithm).
(4) It also separately applies a biLSTM over the two sequences of WEs, q and as, and computes the max pooling over the t-th hidden vector for question to acquire the question embedding o_q^s (line 4 in the algorithm). o_q^s is different from o_q^c since our method does not share the subnetwork used for computing closeness between q and ac and that between q and as, as described above.
(5) It applies the attention mechanism from the conclusion to the supplement: for each t-th hidden vector h_s(t) for the supplement, it computes (lines 5-7 in the algorithm)
m_s(t) = tanh(W_sm h_s(t) + W_cm o_c),  s(t) ∝ exp(w_mb^T m_s(t)),  h̃_s(t) = h_s(t) s(t)    (1)
W_sm, W_cm, and w_mb are attention parameters. Conceptually, the attention mechanism gives more weight to words that include important topics in the conclusion sentence.
(6) It computes the supplement embedding o_s by max pooling over the h̃_s(t) (line 8 in the algorithm).
(7) It computes the closeness between question and conclusion and that between question and supplement as well as the combinational optimization between conclusion and supplement. The training objective is given as (line 9 in the algorithm):
L = max{0, M − (cos(o_q, [o_c^+, o_s^+]) − cos(o_q, [o_c^+, o_s^−]))}
  + max{0, M − (cos(o_q, [o_c^+, o_s^+]) − cos(o_q, [o_c^−, o_s^+]))}
  + max{0, (1 + k)M − (cos(o_q, [o_c^+, o_s^+]) − cos(o_q, [o_c^−, o_s^−]))}
  + max{0, M − (cos(o_q, [o_c^+, o_s^−]) − cos(o_q, [o_c^−, o_s^−]))}
  + max{0, M − (cos(o_q, [o_c^−, o_s^+]) − cos(o_q, [o_c^−, o_s^−]))}    (2)
where [y, z] is the concatenation of two vectors y and z, o_q is [o_q^c, o_q^s], o^+ is an output vector for a ground-truth sentence, and o^− is that for an incorrect sentence randomly chosen from the entire answer space. In the above equation, the first (or second) term presents the loss that occurs when both the question-conclusion pair (q-c) and the question-supplement pair (q-s) are correct while q-c (or q-s) is correct but q-s (or q-c) is incorrect. The third term computes the loss that occurs when both q-c and q-s are correct while both q-c and q-s are incorrect. The fourth (or fifth) term computes the loss that occurs when q-c (or q-s) is correct but q-s (or q-c) is incorrect while both q-c and q-s are incorrect. M is a constant margin and k (0 < k < 1) is a parameter controlling the margin; thus, the resulting margin for the third term is larger than those for the other terms. In this way, by considering the cases in which either conclusions or supplements are incorrect or not, this equation optimizes the combinations of conclusion and supplement. In addition, it can take the closeness between question and conclusion (or supplement) into consideration by cosine similarity.
The parameter sets {W_i, W_f, W_o, W_c, U_i, U_f, U_o, U_c, b_i, b_f, b_o, b_c}_c for question-conclusion matching, {W_i, W_f, W_o, W_c, U_i, U_f, U_o, U_c, b_i, b_f, b_o, b_c}_s for question-supplement matching, and {W_sm, W_cm, w_mb} for conclusion-supplement attention are trained during the iterations. After the model is trained, our method uses cos(o_q, [o_c, o_s]) to score the input (q, ac, as) pair and constructs an answer that has a conclusion and its supplement.
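Given precomputed embeddings, Eq. (2) can be transcribed directly; the following NumPy sketch (illustrative variable names) computes the five hinge terms for one training pair:

```python
# Sketch of the combined hinge objective in Eq. (2), given precomputed
# embeddings: o_q = [o_q_c, o_q_s], and positive/negative conclusion and
# supplement embeddings.
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def construction_loss(o_q, oc_pos, oc_neg, os_pos, os_neg, M=0.2, k=1.0):
    pair = lambda oc, os_: cos(o_q, np.concatenate([oc, os_]))
    pp = pair(oc_pos, os_pos)   # both conclusion and supplement correct
    pn = pair(oc_pos, os_neg)   # conclusion correct, supplement incorrect
    np_ = pair(oc_neg, os_pos)  # conclusion incorrect, supplement correct
    nn = pair(oc_neg, os_neg)   # both incorrect
    hinge = lambda margin, hi, lo: max(0.0, margin - (hi - lo))
    return (hinge(M, pp, pn) + hinge(M, pp, np_)
            + hinge((1 + k) * M, pp, nn)          # larger margin for both-wrong
            + hinge(M, pn, nn) + hinge(M, np_, nn))
```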
We used our method to select or construct answers to the questions stored in the "Love advice" category.
Table 1: Comparison of AP for answer selection
|        | QA-LSTM | Attentive-LSTM | Semantic-LSTM | Construction | Our method |
| K = 1  | 0.8472  | 0.8196         | 0.8499        | 0.8816       | 0.8846     |
| K = 3  | 0.8649  | 0.844566       | 0.8734        | 0.8884       | 0.8909     |
| K = 5  | 0.8653  | 0.8418         | 0.8712        | 0.8827       | 0.8845     |
| K = 10 | 0.8603  | 0.8358         | 0.8658        | 0.8618       | 0.8647     |
Table 2: Comparison of AP for answer construction
Table 3: Comparison of human evaluation results
Table 4: Examples of answers created by QA-LSTM and those by Our method
| Questions | Answers created by QA-LSTM | Answers created by Our method |
| I'm afraid to confess my love to her, what should I do? | You should wait until you feel excited. If you interact with her indifferently, it will be difficult to develop any relation with her. | It is better to concentrate on how to confess your love to her. I understand you are struggling since you love her very much. |
| A guy I like says to me "I like you at home" kiddingly. It may be the ordinary gentleness. Some hope? | You don't have to test his love immediately. Unless he likes you, he would not have gone to see a movie with you. | Yes, there is some hope. You can understand his reaction more easily if your understanding of each other is deeper. |
| I seldom meet an interesting person. I worry about how to become close to him. Should I approach him positively? | Try to select your words correctly. Unless you confess your love to him, it is difficult to convey your emotion to him. | You should confess your love to him. Unless you confess your love to him, it is difficult to convey your emotion to him. |
"}, {"section_index": "5", "section_name": "5.1 DATASET", "section_text": "We evaluated our method using a dataset stored in the Japanese online QA service Oshiete-goo. First, the word embeddings were built by using 189,511 questions and their 771,956 answers stored in 16 categories including "Love Advice", "Traveling", and "Health Care". 6,250 title tokens were extracted from the titles. Then, we evaluated answer selection and construction tasks by using a corpus containing about 5,000 question-conclusion-supplement sentences. Conclusion and supplement sentences were extracted by human experts from answers. The readers could use sentence extraction methods (Schmidt et al. (2014); Zhang et al. (2008); Nishikawa et al. (2010); Chen et al. (2010)) or neural conversation models like (Vinyals & Le (2015)) to semi-automatically extract/generate those sentences.
We compared the accuracy of the following five methods:
- QA-LSTM: proposed by (Tan et al. (2015)).
- Attentive LSTM: introduces an attention mechanism from question to answer and is evaluated as the current best answer selection method (Tan et al. (2016)).
- Semantic LSTM: performs answer selection by using our word embeddings biased with semantics.
- Construction: performs our proposed answer construction without the attention mechanism.
- Our method: performs our answer construction with the attention mechanism from conclusion to supplement."}, {"section_index": "6", "section_name": "5.3 METHODOLOGY AND PARAMETER SETUP", "section_text": "We randomly divided the dataset into two halves, a training dataset and a predicted one, and conducted two-fold cross validation. Results shown later are the average values.
Both for answer selection and construction, we used Average Precision (AP) against the top-K ranked answers in the results because we consider that the most highly ranked answers are important for users. If the number of ranked items is K, the number of correct answers among the top-j ranked items is N_j, and the number of all correct answers (paired with the questions) is D, AP is defined as follows:
AP = (1/D) Σ_{1≤j≤K} N_j / j
For answer construction, we checked whether each method could recreate the original answers. As the reader can easily understand, this is a much more difficult task than answer selection, and thus the values of AP will be smaller than the results for answer selection.
We tried word vectors and QA vectors of different sizes, and finally set the word vector size to 300 and the LSTM output vectors for biLSTMs to 50 × 2. We also tried different margins in the hinge loss function, and fixed the margin M to 0.2 and k to 1.0. The iteration count N was set to 20. For our method, the embeddings for questions, those for conclusions, and those for supplements were pretrained by Semantic LSTM before answer construction since this enhances the overall accuracy.
We did not use the attention mechanism from question to answer for Semantic LSTM, Construction, and Our method. This is because, as we present in the results subsection, the lengths of questions are much longer than those of answer sentences, and thus the attention mechanism from question to answer became noise for sentence selection.
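For concreteness, the AP metric as reconstructed above can be computed with a small sketch (`is_correct` is a hypothetical per-rank correctness list, not part of the paper):

```python
def average_precision(is_correct, D, K):
    """AP = (1/D) * sum_{1 <= j <= K} N_j / j, with N_j = # correct in top j."""
    n_correct, total = 0, 0.0
    for j, hit in enumerate(is_correct[:K], start=1):
        n_correct += int(hit)
        total += n_correct / j
    return total / D

# Example: ranking [True, False, False] with D = 2 and K = 3 gives
# N = (1, 1, 1), so AP = (1/1 + 1/2 + 1/3) / 2 ≈ 0.9167.
```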
Answer Selection We first compare the accuracy of the methods for answer selection. The results are shown in Table 1. QA-LSTM and Attentive LSTM are worse than Semantic-LSTM. This indicates that Semantic-LSTM can incorporate semantic information (titles/categories) into word embeddings; it can emphasize words according to the context in which they appear, and thus the matching accuracy between the question vector and the conclusion (supplement) vector was improved. Attentive LSTM is worse than QA-LSTM, as described above. Construction and Our method are better than Semantic-LSTM. This is because they can avoid combinations of sentences that are not good enough even though the scores of closeness between questions and sentences are high. This implies that, if the combination is not good, the selection of answer sentences also tends to be erroneous. Finally, Our method, which provides sophisticated selection/combination strategies, yielded higher accuracy than the other methods. It achieved 4.4% higher accuracy than QA-LSTM (QA-LSTM marked 0.8472 while Our method marked 0.8846).
Answer Construction We then compared the accuracy of the methods for answer construction. Especially for the answer construction task, the top-1 result is most important since many QA applications show only the top-1 answer. The results are shown in Table 2. There is no answer construction mechanism in QA-LSTM, Attentive-LSTM, and Semantic-LSTM; thus we simply merge the conclusion and the supplement, each of which has the highest similarity with the question by each method. QA-LSTM and Attentive LSTM are much worse than Semantic-LSTM. This is because the sentences output by Semantic-LSTM are selected by utilizing the words that are emphasized for a context for "Love advice" (i.e. category and titles). Construction is better than Semantic-LSTM since it simultaneously learns the optimum combination of sentences as well as the closeness between the question and sentences. Finally, Our method is better than Construction. This is because it well employs the attention mechanism to link conclusion and supplement sentences, and thus the combinations of the sentences are more natural than those of Construction. Our method achieved 20% higher accuracy than QA-LSTM (QA-LSTM marked 0.3262 while Our method marked 0.3901).
The computation time for our method was less than two hours. All experiments were performed on NVIDIA TITAN X/Tesla M40 GPUs, and all methods were implemented in Python in the Chainer framework. Thus, our method well suits real applications. In fact, it is already being used in the love advice service of Oshiete goo.
Human evaluation The outputs of QA-LSTM and Our method were judged by two human experts. The experts entered questions, which were not included in our evaluation datasets, into the AI system and rated the created answers on the following scale: (1) the conclusion and supplement sentences as well as their combination were good; (2) the sentences were good in isolation but their combination was not good; (3) one of the selections (conclusion or supplement) was good but their combination was not good; and (4) both the sentences and their combination were not good. The answers were judged as good if they satisfied the following two points: (A) the contents of the answer sentences correspond to the question; (B) the story between conclusion and supplement is natural.
The results are shown in Table 3. Table 4 presents examples of the questions and answers constructed (they were originally Japanese and were translated into English for readability; the questions are summarized since the original ones were very long). The readers can also see Japanese answers from our service URL presented above. Those results indicate that the experts were much more satisfied with the outputs of Our method than those by QA-LSTM; 58% of the answers created by Our method were classified as (1). This is because, as can be seen in Table 4, Our method can naturally combine the sentences as well as select sentences that match the question.
It well coped with the questions that were somewhat different from those stored in the evaluation dataset.
Actually, when the public used our love advice service, it was surprising to find that 455 answers created by the AI, whose name is oshi-el (and which uses Our method), were judged as Good answers by users from among the 1,492 questions entered from September 6th to November 5th.3 The rate of getting Good answers by oshi-el is twice that of the average human user in Oshiete goo when we focus on users who answered more than 100 questions in the love advice category. Thus, we think this is a good result.
3 This service started on September 6th, 2016."}, {"section_index": "7", "section_name": "6 CONCLUSION", "section_text": "This is the first study that creates answers for non-factoid questions. Our method incorporates the biases of semantics behind questions into word embeddings to improve the accuracy of answer selection. It then simultaneously learns the optimum combination of answer sentences as well as the closeness between questions and sentences. Our evaluation shows that our method achieves 20% higher accuracy in answer construction than the method based on the current best answer selection method. Our model presents an important direction for future studies on answer generation. Since the sentences themselves in the answer are short, they can be generated by neural conversation models like (Vinyals & Le (2015)); this means that our model can be extended to generate complete answers once the abstract scenario is made."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Michael Bendersky, Donald Metzler, and W. Bruce Croft. Parameterized concept weighting in verbose queries. In Proc. SIGIR'11, pp. 605-614, 2011.
Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. Retrofitting word vectors to semantic lexicons. In Proc. NAACL HLT'15, pp. 1606-1615, 2015.
Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. Applying deep learning to answer selection: A study and an open task. CoRR, abs/1508.01585, 2015.
Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. Convolutional neural network architecture for matching natural language sentences. In Proc. NIPS'14, pp. 2042-2050, 2014.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML'15, volume 37, pp. 448-456, 2015.
Richard Johansson and Luis Nieto Pina. Embedding a semantic network in a word space. In Proc. NAACL HLT'15, pp. 1428-1433, 2015.
Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. In Proc. ICML'14, pp. 1188-1196, 2014.
Hitoshi Nishikawa, Takaaki Hasegawa, Yoshihiro Matsuo, and Genichiro Kikui. Opinion summarization with integer linear programming formulation for sentence extraction and ordering. In Proc. COLING'10, pp. 910-918, 2010.
David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Neurocomputing: Foundations of research, chapter Learning Representations by Back-propagating Errors, pp. 696-699, 1988.
Sebastian Schmidt, Steffen Schnitzer, and Christoph Rensing. Domain-independent sentence type classification: Examining the scenarios of scientific abstracts and scrum protocols. In Proc. i-KNOW '14, pp. 5:1-5:8, 2014.
Oriol Vinyals and Quoc V. Le. A neural conversational model. CoRR, abs/1506.05869, 2015.
Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. What is the jeopardy model?
A quasi-synchronous grammar for QA. In Proc. EMNLP-CoNLL'07, pp. 22-32, 2007.
Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. Deep learning for answer sentence selection. CoRR, abs/1412.1632, 2014.
Ming Tan, Cicero Nogueira dos Santos, Bing Xiang, and Bowen Zhou. Improved representation learning for question answer matching. In Proc. ACL'16, pp. 464-473, 2016.
Jiajun Zhang, Chengqing Zong, and Shoushan Li. Sentence type based reordering model for statistical machine translation. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, pp. 1089-1096, 2008."}]
Bk0FWVcgx | [{"section_index": "0", "section_name": "INTRODUCTION", "section_text": "Optimization is a critical component in deep learning, governing its success in different areas of computer vision, speech processing and natural language processing. The prevalent optimization strategy is Stochastic Gradient Descent, invented by Robbins and Monro in the 50s. The empirical performance of SGD on these models is better than one could expect in generic, arbitrary non-convex loss surfaces, often aided by modifications yielding significant speedups Duchi et al. (2011); Hinton et al. (2012); Ioffe & Szegedy (2015); Kingma & Ba (2014). This raises a number of theoretical questions as to why neural network optimization does not suffer in practice from poor local minima.
*Currently on leave from UC Berkeley"}, {"section_index": "1", "section_name": "TOPOLOGY AND GEOMETRY OF HALF-RECTIFIED NETWORK OPTIMIZATION", "section_text": "Joan Bruna *
Courant Institute of Mathematical Sciences, New York University, New York, NY 10011, USA"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The loss surface of deep neural networks has recently attracted interest in the optimization and machine learning communities as a paradigmatic example of a hard, high-dimensional, non-convex problem. Recent work has explored models from statistical physics such as spin glasses Choromanska et al. (2015), in order to understand the macroscopic properties of the system, but at the expense of strongly simplifying the nonlinear nature of the model. Other authors have advocated that the real danger in high-dimensional setups are saddle points rather than poor local minima Dauphin et al. (2014), although recent results rigorously establish that gradient descent does not get stuck on saddle points Lee et al. (2016) but is merely slowed down. Other notable recent contributions are Kawaguchi (2016), which further develops the spin-glass connection from Choromanska et al. (2015) and resolves the linear case by showing that no poor local minima exist; Sagun et al. (2014), which also
A first question we address concerns the topology of these level sets, i.e. under which conditions they are connected. Connected level. sets imply that one can always find a descent direction at each energy level, and therefore that no poor local minima can exist. In absence of nonlinearities, deep (linear) networks have connected. level sets Kawaguchi|(2016). We first generalize this result to include ridge regression (in the two layer case) and provide an alternative, more direct proof of the general case. We then move to the. half-rectified case and show that the topology is intrinsically different and clearly dependent on the. interplay between data distribution and model architecture. Our main theoretical contribution is to. prove that half-rectified single layer networks are asymptotically connected, and we provide explicit bounds that reveal the aforementioned interplay..\nBeyond the question of whether the loss contains poor local minima or not, the immediate follow-up question that determines the convergence of algorithms in practice is the local conditioning of the. loss surface. It is thus related not to the topology but to the shape or geometry of the level sets. As the energy level decays, one expects the level sets to exhibit more complex irregular structures. which correspond to regions where F() has small curvature. In order to verify this intuition, we. introduce an efficient algorithm to estimate the geometric regularity of these level sets by approx. imating geodesics of each level set starting at two random boundary points. Our algorithm uses. dynamic programming and can be efficiently deployed to study mid-scale CNN architectures on. MNIST, CIFAR-10 and RNN models on Penn Treebank next word prediction. Our empirical results. show that these models have a nearly convex behavior up until their lowest test errors, with a single. connected component that becomes more elongated as the energy decays. The rest of the paper is. structured as follows. Section 2 presents our theoretical results on the topological connectedness. of multilayer networks. Section 3 presents our path discovery algorithm and Section 4 covers the. numerical experiments.\nLet P be a probability measure on a product space ' V, where we assume I' and V are Euclidean vector spaces for simplicity. Let {(xi, yi)}i be an iid sample of size L drawn from P defining the. training set. We consider the classic empirical risk minimization of the form.\nOur setup considers the case where R consists on either l1 or l2 norms, as we shall describe belov They correspond to well-known sparse and ridge regularization respectively\nL 1 L l=1\nwhere (x; 0) encapsulates the feature representation that uses parameters 0 e RS and R(0) is a. regularization term. In a deep neural network, 0 contains the weights and biases used in all layers For convenience, in our analysis we will also use the oracle risk minimization:.\nFo(0) =E(x,Y)~p|(X;0) -Y|2 + kR(0).\nThe first question we study is the structure of critical points of Fe(0) and F,(0) when is a mul. tilayer neural network. For simplicity, we consider first a strict notion of local minima: 0 E RS is. a strict local minima of F if there is e > 0 with F(0) > F(0) for all 0' E B(0,e) and 0' 0 In particular, we are interested to know whether Fe has local minima which are not global minima This question is answered by knowing whether F(X) is connected at each energy level X:.\nProposition 2.1. 
If F() is connected for all X then every local minima of F(0) is a global minima\nStrict local minima implies that VF(0) = 0 and HF(0) > 0, but avoids degenerate cases where F is constant along a manifold intersecting 0. In that scenario, if Ue denotes that manifold, oui reasoning immediately implies that if () are connected, then for all e > 0 there exists 0' with dist(0',Ue) < e and F(0') < F(0). In other words, some element at the boundary of Ue must be a. saddle point. A stronger property that eliminates the risk of gradient descent getting stuck at Ue is. that all elements at the boundary of Ue are saddle points. This can be guaranteed if one can show. that there exists a path connecting any 0 to the lowest energy level such that F is strictly decreasing. along it.\nSuch degenerate cases arise in deep linear networks in absence of regularization. If 0 (W1,..., Wk) denotes any parameter value, with N1,... Nk denoting the hidden layer sizes, and Fr E GL. (R) are arbitrary elements of the general linear group of invertible N N matrices. with positive determinant, then\nUe ={WiF-, FiW2F-',...,FKWK ; Fk E GL, (R)}\nWe first consider the particularly simple case where F' is a multilayer network defined by\n(x;0)=Wk...Wx, 0=(Wi,...,Wk)\nProposition 2.2. Let W1, W2,..., Wk be weight matrices of sizes nk nk+1, k < K, and let. Fe(0), Fo(0) denote the risk minimizations using as in (4). Assume that n; min(n1, nK) for. j = 2...K - 1. Then F. () (and F.) is connected for all X and all K when k = 0, and for k > 0 when K = 2; and therefore there are no poor local minima in these cases. Moreover, any 0 can be connected to the lowest energy level with a strictly decreasing path..\nLet us highlight that this result is slightly complementary than that of Kawaguchi (2016), Theoren 2.3. Whereas we require n; min(n1, nk) for j = 2... K - 1 and our analysis does not inform about the order of the saddle points, we do not need full rank assumptions on x nor the weights Wk.\n2F()={0ERS; F(0)A}.\nIn particular, Ue has a Lie Group structure. In the half-rectified nonlinear case, the general linear group is replaced by the Lie group of homogeneous invertible matrices Fk = diag(a1,..., QN. with Q; > 0.\nThis proposition shows that a sufficient condition to prevent the existence of poor local minima is having connected level sets, but this condition is not necessary: one can have isolated local minima lying at the same energy level. This can be the case in systems that are defined up to a discrete symmetry group, such as multilayer neural networks. However, as we shall see next, this case puts the system in a brittle position, since one needs to be able to account for all the local minima (and there can be exponentially many of them as the parameter dimensionality increases) and verify that their energy is indeed equal.\nand the ridge regression R(0) = ||o|2. This model defines a non-convex (and non-concave) loss Fe(0). When k = 0, it has been shown in|Saxe et al.(2013) and Kawaguchi[(2016) that in this case, every local minima is a global minima. We provide here an alternative proof of that result that uses a somewhat simpler argument and allows for k > 0 in the case K = 2.\nThis result does also highlight a certain mismatch between the picture of having no poor local min ima and generalization error. Incorporating regularization drastically changes the topology, and the fact that we are able to show connectedness only in the two-layer case with ridge regression is pro. 
found; we conjecture that extending it to deeper models requires a different regularization, perhaps using more general atomic norms Bach (2013). But we now move our interest to the nonlinear case which is more relevant to our purposes."}, {"section_index": "3", "section_name": "2.3.1 NONLINEAR MODELS ARE GENERALLY DISCONNECTED", "section_text": "One may wonder whether the same phenomena of global connectedness also holds in the half-. rectified case. A simple motivating counterexample shows that this is not the case in general. Con- sider a simple setup with X E R2 drawn from a mixture of two Gaussians W_1 and N1, and let. Y = (X - z) : Z , where Z is the (hidden) mixture component taking {1, -1} values. Let. Y = (X; {W1, W2}) be a single-hidden layer ReLU network, with two hidden units. Let 0A be. a configuration that bisects the two mixture components, and let 0B the same configuration, but. swapping the bisectrices. One can verify that they can both achieve arbitrarily small risk by letting. the covariance of the mixture components go to 0. However, any path that connects 0A to 0B must. necessarily pass through a point in which W has rank 1, which leads to an estimator with risk at least 1/2.\nIn fact, it is easy to see that this counter-example can be extended to any generic half-rectified ar- chitecture, if one is allowed to adversarially design a data distribution. For any given (X; 0) with arbitrary architecture and current parameters 0 = (W), let Pe = {A1,..., As} be the underly- ing tessellation of the input space given by our current choice of parameters; that is, (X; 0) is piece-wise linear and Pe contains those pieces. Now let X be any arbitrary distribution with density\nthat there exist poor local minima. Let 0' be a different set of parameters, and Y'X = (X; 0 be a different target distribution. Now consider the data distribution given by the mixture\nXp(x) , z ~ Bernoulli(), YX,z = z(X;0) +(1z)(X;0\nBy adjusting the mixture component we can clearly change the risk at 0 and 0' and make them different, but we conjecture that this preserves the status of local minima of 0 and 0'. Appendix E constructs a counter-example numerically\nThis illustrates an intrinsic difficulty in the optimization landscape if one is after universal guarantee. that do not depend upon the data distribution. This difficulty is non-existent in the linear case anc. not easy to exploit in mean-field approaches such as Choromanska et al.(2015), and shows tha. in general we should not expect to obtain connected level sets. However, connectedness can b recovered if one is willing to accept a small increase of energy and make some assumptions on the. complexity of the regression task. Our main result shows that the amount by which the energy i. allowed to increase is upper bounded by a quantity that trades-off model overparametrization anc. smoothness in the data distribution.\nD(x;0) = WkpWk-1p...pWix, 0 = (Wi,..., Wk)\nWi bi W, = 0 1\nvhere b, contains the biases for each layer. For simplicity, we continue to use W, and x in the following.\np(x) > 0 for all x E Rn, for example a Gaussian, and let Y | X = (X; 0) . Since is invariant under a subgroup of permutations 0, of its hidden layers, it is easy to see that one can find two pa rameter values 0A = 0 and 0B = 0. such that F.(0A) = F.(0B) = 0, but any continuous path y(t) from 0A to 0p will have a different tessellation and therefore won't satisfy Fo(y(t)) = 0. 
Moreover, one can build on this counter-example to show that not only the level sets are disconnected, but also that there exist poor local minima. Let θ' be a different set of parameters, and Y' | X = Φ(X; θ') be a different target distribution. Now consider the data distribution given by the mixture
X ∼ p(x),  z ∼ Bernoulli(π),  Y | X, z = z Φ(X; θ) + (1 − z) Φ(X; θ').
By adjusting the mixture component π we can clearly change the risk at θ and θ' and make them different, but we conjecture that this preserves the status of local minima of θ and θ'. Appendix E constructs a counter-example numerically.
This illustrates an intrinsic difficulty in the optimization landscape if one is after universal guarantees that do not depend upon the data distribution. This difficulty is non-existent in the linear case and not easy to exploit in mean-field approaches such as Choromanska et al. (2015), and shows that in general we should not expect to obtain connected level sets. However, connectedness can be recovered if one is willing to accept a small increase of energy and make some assumptions on the complexity of the regression task. Our main result shows that the amount by which the energy is allowed to increase is upper bounded by a quantity that trades off model overparametrization and smoothness in the data distribution."}, {"section_index": "4", "section_name": "2.3.2 PRELIMINARIES", "section_text": "Before proving our main result, we need to introduce preliminary notation and results. We first describe the case with a single hidden layer of size m.
For that purpose, we start with a characterization of the oracle loss, and for simplicity let us assume Y ∈ R and let us first consider the case with a single hidden layer and ℓ1 regularization: R(θ) = ‖θ‖_1. We define
ε(m) = min_{W_1 ∈ R^{m×n}, ‖W_1(i)‖_2 ≤ 1, W_2 ∈ R^m} E{|Φ(X; θ) − Y|²} + κ‖W_2‖_1
to be the oracle risk using m hidden units with norm 1 and using sparse regression. It is a well-known result by Hornik and Cybenko that a single hidden layer is a universal approximator under very mild assumptions, i.e. lim_{m→∞} ε(m) = 0. This result merely states that our statistical setup is consistent, and it should not be surprising to the reader familiar with classic approximation theory. A more interesting question is the rate at which ε(m) decays, which depends on the smoothness of the joint density (X, Y) ∼ P relative to the nonlinear activation family we have chosen.
For convenience, we redefine W = W_1 and β = W_2, and Z(W) = max(0, WX). We also write z(w) = max(0, ⟨w, X⟩), where (X, Y) ∼ P and w ∈ R^N is any deterministic vector. Let Σ_X = E_P{XX^T} ∈ R^{N×N} be the covariance operator of the random input X. We assume ‖Σ_X‖ < ∞.
A fundamental property that will be essential to our analysis is that, despite the fact that Z is nonlinear, the quantity [w_1, w_2]_Z := E_P{z(w_1) z(w_2)} is locally equivalent to the linear metric ⟨w_1, w_2⟩_X = E_P{w_1^T X X^T w_2} = ⟨w_1, Σ_X w_2⟩, and that the linearization error decreases with the angle between w_1 and w_2. Without loss of generality, we assume here that ‖w_1‖ = ‖w_2‖ = 1, and we write ‖w‖_Z² = E{|z(w)|²}.
Proposition 2.3. Let α = cos^{−1}(⟨w_1, w_2⟩) be the angle between unitary vectors w_1 and w_2 and let w_m = (w_1 + w_2)/‖w_1 + w_2‖ be their unitary bisector. Then
cos²(α/2) ‖w_m‖_Z² − sin²(α/2) ‖Σ_X‖ ≤ [w_1, w_2]_Z ≤ cos²(α/2) ‖w_m‖_Z² + sin²(α/2) ‖Σ_X‖.
The term ‖Σ_X‖ is overly pessimistic: we can replace it by the energy of X projected into the subspace spanned by w_1 and w_2 (which is bounded by 2‖Σ_X‖). When α is small, a Taylor expansion of the trigonometric terms reveals that
(1 − α²/4) ‖w_m‖_Z² − ‖Σ_X‖ α²/4 + O(α⁴) ≤ [w_1, w_2]_Z ≤ (1 − α²/4) ‖w_m‖_Z² + ‖Σ_X‖ α²/4 + O(α⁴),
so that [w_1, w_2]_Z ≃ ⟨w_1, w_2⟩ ‖w_m‖_Z², which is in turn equivalent, up to constants, to ⟨w_1, Σ_X w_2⟩.
The local behavior of parameters w_1, w_2 on our regression problem is thus equivalent to that of having a linear layer, provided w_1 and w_2 are sufficiently close to each other. This result can be seen as a spoiler of what is coming: increasing the hidden layer dimensionality m will increase the chances to encounter pairs of vectors w_1, w_2 with small angle; and with it some hope of approximating the previous linear behavior thanks to the small linearization error.
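The local equivalence stated in Proposition 2.3 is easy to check numerically. The following sketch (assuming, for illustration, a standard Gaussian input so that Σ_X = I; not the authors' code) estimates the rectified kernel by Monte Carlo and compares it against the linearized quantity ⟨w_1, w_2⟩‖w_m‖_Z²:

```python
# Monte Carlo check: for unit vectors with small angle alpha and X ~ N(0, I),
# [w1, w2]_Z = E[z(w1) z(w2)] is close to <w1, w2> * ||w_m||_Z^2.
import numpy as np

rng = np.random.default_rng(0)
n, samples, alpha = 10, 1_000_000, 0.1

w1 = np.zeros(n); w1[0] = 1.0
w2 = np.zeros(n); w2[0] = np.cos(alpha); w2[1] = np.sin(alpha)
wm = (w1 + w2) / np.linalg.norm(w1 + w2)           # unitary bisector

X = rng.standard_normal((samples, n))
z = lambda w: np.maximum(0.0, X @ w)

kernel = np.mean(z(w1) * z(w2))                    # [w1, w2]_Z
linearized = np.dot(w1, w2) * np.mean(z(wm) ** 2)  # <w1, w2> ||w_m||_Z^2

print(kernel, linearized)  # both ~0.4975 for alpha = 0.1; the gap is O(alpha^2)
```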
In order to control the connectedness, we need a last definition. Given a hidden layer of size m with current parameters W ∈ R^{n×m}, we define a "robust compressibility" factor as
δ_W(l, α; m) = min_{‖γ‖_0 ≤ l, sup_i |∠(w̃_i, w_i)| ≤ α} E{|Y − γ Z(W̃)|² + κ‖γ‖_1},  (l ≤ m).
This quantity thus measures how easily one can compress the current hidden layer representation by keeping only a subset of l of its units, but allowing these units to move by a small amount controlled by α. It is a form of n-width similar to the Kolmogorov width Donoho (2006) and is also related to robust sparse coding from Tang et al. (2013); Ekanadham et al. (2011).
Our main result considers now a non-asymptotic scenario given by some fixed size m of the hidden layer. Given two parameter values θ^A = (W_1^A, W_2^A) ∈ W and θ^B = (W_1^B, W_2^B) with F_o(θ^{A,B}) ≤ λ, we show that there exists a continuous path γ : [0, 1] → W connecting θ^A and θ^B such that its oracle risk is uniformly bounded by max(λ, ε), where ε decreases with model overparametrization.
Theorem 2.4. For any θ^A, θ^B ∈ W and λ ∈ R satisfying F_o(θ^{A,B}) ≤ λ, there exists a continuous path γ : [0, 1] → W such that γ(0) = θ^A, γ(1) = θ^B and
F_o(γ(t)) ≤ max(λ, ε),  with
ε = inf_{l ≤ m, α} { max( δ_{W^A}(m, 0; m), δ_{W^A}(m − l, α; m), δ_{W^B}(m, 0; m), δ_{W^B}(m − l, α; m) ) + C_1 α + O(α²) }.
Some remarks are in order. First, our regularization term is currently a mix between ℓ2 norm constraints on the first layer and ℓ1 norm constraints on the second layer. We believe this is an artifact of our proof technique, and we conjecture that more general regularizations yield similar results. Next, this result uses the data distribution through the oracle bound ε(m) and the covariance term. The extension to empirical risk is accomplished by replacing the probability measure P by the empirical measure P̂ = (1/L) Σ_l δ((x, y) − (x_l, y_l)). However, our asymptotic analysis has to be carefully re-examined to take into account and avoid the trivial regime when M outgrows L. A consequence of Theorem 2.4 is that as m increases, the model becomes asymptotically connected, as proven in the following corollary.
Corollary 2.5. As m increases, the energy gap ε satisfies ε = O(m^{−1/n}) and therefore the level sets become connected at all energy levels.
This is consistent with the overparametrization results from Safran & Shamir (2015); Shamir (2016) and the general common knowledge amongst deep learning practitioners. Our next sections explore this question, and refine it by considering not only topological properties but also some rough geometrical measure of the level sets."}, {"section_index": "5", "section_name": "3.1 THE GREEDY ALGORITHM", "section_text": "The intuition behind our main result is that, for smooth enough loss functions and for sufficient overparameterization, it should be "easy" to connect two equally powerful models, i.e., two models with F_o(θ^{A,B}) ≤ λ. A sensible measure of this ease of connectedness is the normalized length of the geodesic connecting one model to the other: |γ_{A,B}(t)| / |θ^A − θ^B|. This length represents approximately how far of an excursion one must make in the space of models relative to the euclidean distance between a pair of models. Thus, convex models have a geodesic length of 1, because the geodesic is simply linear interpolation between models, while more non-convex models have geodesic lengths strictly larger than 1.
Because calculating the exact geodesic is difficult, we approximate the geodesic paths via a dynamic programming approach we call Dynamic String Sampling. We comment on alternative algorithms in Appendix A.
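The normalized geodesic length used throughout the experiments can be estimated for any piecewise-linear string of beads with a few lines of NumPy (illustrative helper, not the release code):

```python
# Normalized length of a piecewise-linear path between two models: 1.0 for a
# straight line (convex case), larger for more curved low-loss paths.
import numpy as np

def normalized_path_length(beads):
    """beads: list of 1-D parameter vectors [theta_A, ..., theta_B]."""
    segments = sum(np.linalg.norm(beads[i + 1] - beads[i])
                   for i in range(len(beads) - 1))
    return segments / np.linalg.norm(beads[-1] - beads[0])
```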
For a pair of models with network parameters θ_i and θ_j, each with F_e(θ) below a threshold L_0, we aim to efficiently generate paths in the space of weights where the empirical loss along the path remains below L_0. These paths are continuous curves belonging to Ω_F(λ), that is, the level sets of the loss function of interest.
Algorithm 1 Greedy Dynamic String Sampling
1: L_0 ← threshold below which a path will be found
2: Φ_1 ← randomly initialize θ_1, train Φ(x; θ_1) to L_0
3: Φ_2 ← randomly initialize θ_2, train Φ(x; θ_2) to L_0
4: BeadList ← (Φ_1, Φ_2)
5: Depth ← 0
6: procedure FINDCONNECTION(Φ_1, Φ_2)
7:   t* ← t such that (d/dt) γ(θ_1, θ_2, t) = 0 OR t = 0.5
8:   Φ_3 ← train Φ(x; t* θ_1 + (1 − t*) θ_2) to L_0
9:   BeadList ← insert(Φ_3, after Φ_1, BeadList)
10:  MaxError_1 ← max_t(F_e(t θ_3 + (1 − t) θ_1))
11:  MaxError_2 ← max_t(F_e(t θ_2 + (1 − t) θ_3))
12:  if MaxError_1 > L_0 then return FindConnection(Φ_1, Φ_3)
13:  if MaxError_2 > L_0 then return FindConnection(Φ_3, Φ_2)
14:  Depth ← Depth + 1"}, {"section_index": "6", "section_name": "3.2 FAILURE CONDITIONS AND PRACTICALITIES", "section_text": "While the algorithm presented will faithfully certify two models are connected if the algorithm converges, it is worth emphasizing that the algorithm does not guarantee that two models are disconnected if the algorithm fails to converge. In general, the problem of determining if two models are connected can be made arbitrarily difficult by choice of a particularly pathological geometry for the loss function, so we are constrained to heuristic arguments for determining when to stop running the algorithm. Thankfully, in practice, loss function geometries for problems of interest are not intractably difficult to explore. We comment more on diagnosing disconnections more carefully in Appendix E.
Further, if the MaxError exceeds L_0 for every new recursive branch as the algorithm progresses, the worst case runtime scales as O(exp(Depth)). Empirically, we find that the number of new models added at each depth does grow, but eventually saturates, and falls for a wide variety of models and architectures, so that the typical runtime is closer to O(poly(Depth)), at least up until a critical value of L_0.
To aid convergence, either of the choices in line 7 of the algorithm works in practice: choosing t* as a local maximum can provide a modest increase in algorithm runtime, but can be unstable if the calculated interpolated loss is particularly flat or noisy. t* = 0.5 is more stable, but slower. Finally, we find that training Φ_3 to αL_0 for α < 1 in line 8 of the algorithm tends to aid convergence without noticeably impacting our numerics. We provide further implementation details in Section 4.
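A compact runnable skeleton of the recursion, with t* fixed to 0.5 and with `train_to_threshold` standing in for retraining an interpolated model below L_0 (both simplifications relative to Algorithm 1; all names are hypothetical):

```python
# Simplified Dynamic String Sampling: recursively bisect a segment, retrain the
# midpoint below L0, and keep going until every linear segment stays low-loss.
import numpy as np

def max_interp_loss(a, b, loss, n=20):
    return max(loss(t * b + (1 - t) * a) for t in np.linspace(0.0, 1.0, n))

def connect(theta_a, theta_b, loss, train_to_threshold, L0,
            depth=0, max_depth=20):
    """Return beads [theta_a, ..., theta_b] whose segments all stay below L0."""
    if max_interp_loss(theta_a, theta_b, loss) <= L0:
        return [theta_a, theta_b]
    if depth >= max_depth:
        raise RuntimeError("no low-loss path found (possible disconnection)")
    theta_c = train_to_threshold(0.5 * (theta_a + theta_b))   # new bead, t* = 0.5
    left = connect(theta_a, theta_c, loss, train_to_threshold, L0,
                   depth + 1, max_depth)
    right = connect(theta_c, theta_b, loss, train_to_threshold, L0,
                    depth + 1, max_depth)
    return left + right[1:]   # avoid duplicating theta_c
```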
"}, {"section_index": "7", "section_name": "4 NUMERICAL EXPERIMENTS", "section_text": "For our numerical experiments, we calculated normalized geodesic lengths for a variety of regression and classification tasks. In practice, this involved training a pair of randomly initialized models to the desired test loss value/accuracy/perplexity, and then attempting to connect that pair of models via the Dynamic String Sampling algorithm. We also tabulated the average number of "beads", i.e. the number of intermediate models needed by the algorithm to connect two initial models. For all of the below experiments, the reported losses and accuracies are on a restricted test set. For more complete architecture and implementation details, see our GitHub page.

The results are broadly organized by increasing model complexity and task difficulty, from easiest to hardest. Throughout, and remarkably, we were able to easily connect models for every dataset and architecture investigated except the one explicitly constructed counterexample discussed in Appendix E.1. Qualitatively, all of the models exhibit a transition from a highly convex regime at high loss to a non-convex regime at low loss, as demonstrated by the growth of the normalized length as well as the monotonic increase in the number of required "beads" to form a low-loss connection.

"}, {"section_index": "8", "section_name": "4.1 POLYNOMIAL REGRESSION", "section_text": "We studied a 1-4-4-1 fully connected multilayer perceptron style architecture with sigmoid nonlinearities and RMSProp/ADAM optimization. For ease of analysis, we restricted the training and test data to be strictly contained in the interval x ∈ [0, 1] and f(x) ∈ [0, 1]. The number of required beads, and thus the runtime of the algorithm, grew approximately as a power law, as demonstrated in Table 1, Fig. 1. We also provide a visualization of a representative connecting path between two models of equivalent power in Appendix D.

The cubic regression task exhibits an interesting feature around L_0 = .15 in Table 1, Fig. 2, where the normalized length spikes, but the number of required beads remains low. Up until this point, the cubic model is strongly convex, so this first spike seems to indicate the onset of non-convex behavior and a concomitant radical change in the geometry of the loss surface for lower loss.

[Table 1: panels plotting normalized geodesic length and average number of beads against the loss threshold (regression tasks, panels 1-2), % error on the test set (MNIST and CIFAR-10, panels 3-4), and perplexity (PTB, panel 5); the plotted data is not recoverable here.]

"}, {"section_index": "9", "section_name": "4.2 CONVOLUTIONAL NEURAL NETWORKS", "section_text": "To test the algorithm on larger architectures, we ran it on the MNIST handwritten digit recognition task as well as the CIFAR10 image recognition task, indicated in Table 1, Figs. 3 and 4. Again, the data exhibit strong qualitative similarity with the previous models: normalized length remains low until a threshold loss value, after which it grows approximately as a power law. Interestingly, the MNIST dataset exhibits very low normalized length, even for models nearly at the state of the art in classification power, in agreement with the folk understanding that MNIST is highly convex and/or "easy". The CIFAR10 dataset, however, exhibits large non-convexity, even at the modest test accuracy of 80%.

To gauge the generalizability of our algorithm, we also applied it to an LSTM architecture for solving the next word prediction task on the PTB dataset, depicted in Table 1, Fig. 5. Notably, even for a radically different architecture, loss function, and data set, the normalized lengths produced by the DSS algorithm recapitulate the same qualitative features seen in the above datasets - i.e., models can be easily connected at high perplexity, and the normalized length grows at lower and lower perplexity after a threshold value, indicating an onset of increased non-convexity of the loss surface.

"}, {"section_index": "10", "section_name": "5 DISCUSSION", "section_text": "We have addressed the problem of characterizing the loss surface of neural networks from the perspective of gradient descent algorithms. We explored two angles - topological and geometrical aspects - that build on top of each other.
On the one hand, we have presented new theoretical results that quantify the amount of uphill climbing that is required in order to progress to lower energy configurations in single hidden-layer ReLU networks, and proved that this amount converges to zero with overparametrization under mild conditions. On the other hand, we have introduced a dynamic programming algorithm that efficiently approximates geodesics within each level set, providing a tool that not only verifies the connectedness of level sets, but also estimates the geometric regularity of these sets. Thanks to this information, we can quantify how "non-convex" an optimization problem is, and verify that the optimization of quintessential deep learning tasks - CIFAR-10 and MNIST classification using CNNs, and next-word prediction using LSTMs - behaves in a nearly convex fashion up until they reach high accuracy levels.

That said, there are some limitations to our framework. In particular, we do not address saddle-point issues that can greatly affect the actual convergence of gradient descent methods. There are also a number of open questions; amongst those, in the near future we shall concentrate on:

Extending Theorem 2.4 to the multilayer case. We believe this is within reach, since the main analytic tool we use is that small changes in the parameters result in small changes in the covariance structure of the features. That remains the case in the multilayer setting.

Empirical versus oracle risk. A big limitation of our theory is that right now it does not inform us on the differences between optimizing the empirical risk versus the oracle risk. Understanding the impact of generalization error and stochastic gradients on the ability to do small uphill climbs is an open line of research.

Influence of symmetry groups. Under appropriate conditions, the presence of discrete symmetry groups does not prevent the loss from being connected, but at the expense of increasing the capacity. An important open question is whether one can improve the asymptotic properties by relaxing connectedness to being connected up to discrete symmetry.

Improving numerics with the hyperplane method. Our current numerical experiments employ a greedy (albeit faster) algorithm to discover connected components and estimate geodesics. We plan to perform experiments with the less greedy, constrained variant described in Appendix A.

"}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Mark Tygert for pointing out the reference to the ε-nets and Kolmogorov capacity, and Martin Arjovsky for spotting several bugs in early versions of the results. We would also like to thank Maithra Raghu and Jascha Sohl-Dickstein for enlightening discussions, as well as Yasaman Bahri for helpful feedback on an early version of the manuscript. CDF was supported by the NSF Graduate Research Fellowship under Grant DGE-1106400.

Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In Proc. AISTATS, 2015.
Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, pp. 2933-2941, 2014.

David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289-1306, 2006.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.

Kenji Kawaguchi. Deep learning without poor local minima. arXiv preprint arXiv:1605.07110, 2016.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Jason D. Lee, Max Simchowitz, Michael I. Jordan, and Benjamin Recht. Gradient descent converges to minimizers. University of California, Berkeley, 1050:16, 2016.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.

Ohad Shamir. Distribution-specific hardness of learning neural networks. arXiv preprint arXiv:1609.01037, 2016.

Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016.

Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.

"}, {"section_index": "12", "section_name": "A CONSTRAINED DYNAMIC STRING SAMPLING", "section_text": "While the algorithm presented in Sec. 3.1 is fast for sufficiently smooth families of loss surfaces with few saddle points, here we present a slightly modified version which, while slower, provides more control over the convergence of the string. We did not use the algorithm presented in this section for our numerical studies.

Instead of training intermediate models via full SGD to a desired accuracy as in step 8 of the algorithm, intermediate models are subject to a constraint that ensures they are "close" to the neighboring models on the string. Specifically, intermediate models are constrained to the unique hyperplane in weight space equidistant from their two neighbors. This can be further modified by additional regularization terms to control the "springy-ness" of the string. These heuristics could be chosen to try to more faithfully sample the geodesic between two models.

In practice, for a given model on the string, θ_i, these two regularizations augment the standard loss with penalties built from (θ_{i-1} - θ_{i+1})/2 and θ_i - (θ_{i-1} + θ_{i+1})/2, with coefficients ζ and κ respectively. The ζ regularization term controls the "springy-ness" of the weight string, and the κ regularization term controls how far off the hyperplane a new model can deviate.
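As a rough illustration of the constrained variant, the sketch below augments a bead's loss with the two penalties. The exact weighting is an assumption here (the squared-norm form and the default ζ, κ coefficients are our guesses, not quoted from the text), and `loss` is again a user-supplied callable.

import numpy as np

def constrained_bead_loss(loss, thetas, i, zeta=1e-3, kappa=1e-3):
    # Augmented loss for bead i (a sketch under assumed penalty forms):
    # the zeta term acts as a spring between the two neighbors, and the
    # kappa term penalizes leaving the hyperplane equidistant from them
    # (approximated here by the distance to their midpoint).
    prev_t, next_t = thetas[i - 1], thetas[i + 1]
    spring = np.sum((0.5 * (prev_t - next_t)) ** 2)
    off_plane = np.sum((thetas[i] - 0.5 * (prev_t + next_t)) ** 2)
    return loss(thetas[i]) + zeta * spring + kappa * off_plane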
Because adapting DSS to use this constraint is straightforward, here we will describe an alternative "breadth-first" approach wherein models are trained in parallel until convergence. This alternative approach has the advantage that it will indicate a disconnection between two models "sooner" during training. The precise geometry of the loss surface will dictate which approach to use in practice.

Given two random models θ_i and θ_j, each with loss below L_0, we aim to follow the evolution of the family of models connecting θ_i to θ_j. Intuitively, almost every continuous path in the space of random models connecting θ_i to θ_j has, on average, the same (high) loss. For simplicity, we choose to initialize the string to the linear segment interpolating between these two models. If this entire segment is evolved via gradient descent, the segment will either evolve into a string which is entirely contained in a basin of the loss surface, or some number of points will become fixed at a higher loss. These fixed points are difficult to detect directly, but will be indirectly detected by the persistence of a large interpolated loss between two adjacent models on the string.

The algorithm proceeds as follows:

1. Initialize the model string to have two models, θ_i and θ_j, joined by the linear segment between them.
2. Train all models on the string in parallel, tracking the instantaneous loss threshold L_0(t) as training progresses.
3. If the pairwise interpolated loss between θ_n and θ_{n+1} exceeds L_0(t), insert a new model at the maximum of the interpolated loss (or halfway) between these two models.
4. Repeat steps 2 and 3 until all models, and all pairwise interpolated losses, are below the final threshold L_0.

"}, {"section_index": "13", "section_name": "B.1 PROOF OF PROPOSITION 2.1", "section_text": "Suppose that θ_1 is a local minimum and θ_2 is a global minimum, but F(θ_1) > F(θ_2). If λ = F(θ_1), then clearly θ_1 and θ_2 both belong to Ω_F(λ). Suppose now that Ω_F(λ) is connected. Then we could find a smooth (i.e. continuous and differentiable) path γ(t) with γ(0) = θ_1, γ(1) = θ_2 and F(γ(t)) ≤ λ = F(θ_1). But this contradicts the strict local minimum status of θ_1, and therefore Ω_F(λ) cannot be connected."}, {"section_index": "14", "section_name": "B.2 PROOF OF PROPOSITION 2.2", "section_text": "Let us first consider the case with κ = 0. We proceed by induction over the number of layers K. For K = 1, the loss F(θ) is convex. Let θ^A, θ^B be two arbitrary points in a level set Ω_F(λ). Thus F(θ^A) ≤ λ and F(θ^B) ≤ λ. By definition of convexity, a linear path is sufficient in that case to connect θ^A and θ^B:

$F((1 - t)\theta^A + t\theta^B) \le (1 - t)F(\theta^A) + tF(\theta^B) \le \lambda.$

Suppose now that the result is true for K - 1 layers. For K layers, it suffices to construct, for a given pair of consecutive layers (k* - 1, k*), a path (W_{k*-1}(t), W_{k*}(t)) such that

(i) W_{k*-1}(0) = W^A_{k*-1} and W_{k*-1}(1) = W^B_{k*-1},
(ii) W_{k*}(0) = W^A_{k*} and W_{k*}(1) = W^B_{k*},
(iii) W_{k*}(t) W_{k*-1}(t) = W̃_{k*-1}(t) for t ∈ (0, 1),

where W̃_{k*-1}(t) is the path connecting the layer products provided by the induction hypothesis. Such a lift exists as soon as W_{k*}(t) has full rank along the path, which can be ensured by taking

$W_{k*}(t) = tW^B_{k*} + (1 - t)W^A_{k*} + t(1 - t)V,$

with V chosen such that

$W_{k*}(t) W_{k*}(t)^{\dagger} = I_{N_{k*}} \quad \text{for } t \in (0, 1).$

Finally, let us prove that the result is also true when K = 2 and κ > 0. We construct the path using the variational properties of atomic norms (Bach, 2013). When we pick the ridge regression regularization, the corresponding atomic norm is the nuclear norm:

$\|X\|_* = \min_{UV^{\top} = X} \tfrac{1}{2}\big(\|U\|^2 + \|V\|^2\big).$

Writing W^{A,B} = W_2^{A,B} W_1^{A,B}, the loss expressed in terms of W becomes

$E\{|Y - WX|^2\} + 2\kappa\|W\|_*,$

which is convex with respect to W. Thus a linear path W(t) from W^A to W^B is guaranteed to stay below F_o(θ^{A,B}). Let us define

$\forall t,\ (W_1(t), W_2(t)) = \arg\min_{UV^{\top} = W(t)} \big(\|U\|^2 + \|V\|^2\big),$

together with rebalancing paths at the endpoints,

$\forall s,\ \beta_1(s)\beta_2(s) = W^A \ \text{with}\ \|\beta_1(s)\|^2 + \|\beta_2(s)\|^2 \ \text{decreasing},$

connecting (W_1^A, W_2^A) to (W_1(0), W_2(0)), and similarly for (W_1^B, W_2^B) to (W_1(1), W_2(1)). The concatenated path (β_{1,2}(s), W_{1,2}(t), β̄_{1,2}(s)) satisfies (i)-(iii) by definition. We also verify that

$\|W_1(t)\|^2 + \|W_2(t)\|^2 = 2\|W(t)\|_* \le 2(1 - t)\|W(0)\|_* + 2t\|W(1)\|_* \le (1 - t)\big(\|W_1(0)\|^2 + \|W_2(0)\|^2\big) + t\big(\|W_1(1)\|^2 + \|W_2(1)\|^2\big),$

so the regularization term does not increase along the path. Finally, we verify that the paths we have just created, when applied to θ^A arbitrary and θ^B = θ* a global minimum, are strictly decreasing, again by induction. For K = 1, this is again an immediate consequence of convexity.
For K > 1, our inductive construction guarantees that for any 0 < t < 1 the path θ(t) = (W_k(t))_{k \le K} satisfies F_o(θ(t)) ≤ F_o(θ^A). This concludes the proof.

We now record the estimates used in the proof of Proposition 2.3. Given weights w_1, w_2, define the region

$A(w_1, w_2) = \{x \in \mathbb{R}^n :\ \langle x, w_1 \rangle \ge 0,\ \langle x, w_2 \rangle \ge 0\},$

so that

$E\{\max(0, \langle X, w_1 \rangle)\max(0, \langle X, w_2 \rangle)\} = \int_{A(w_1, w_2)} \langle x, w_1 \rangle \langle x, w_2 \rangle\, dP(x) = \int_{Q(A(w_1, w_2))} \langle Q(x), w_1 \rangle \langle Q(x), w_2 \rangle\, dP(Q(x)), \quad (17)$

where Q is the orthogonal projection onto the space spanned by w_1 and w_2, and dP(x) = dP(x_1, x_2) is the marginal density on that subspace. Since this projection does not interfere with the rest of the proof, we abuse notation by dropping the Q and still referring to dP(x) as the probability density.

Decomposing w_1 and w_2 along their unit bisector w_m and the orthogonal direction d, with r their common norm and α the angle between them, we have

$\langle x, w_1 \rangle \langle x, w_2 \rangle = r^2\big(|\langle x, w_m \rangle|^2 - |\langle x, d \rangle|^2\big),$

and therefore the integral in (17) equals the corresponding integral of r^2 |⟨x, w_m⟩|^2 over A(w_m, w_m), up to two error terms E_1 (accounting for the symmetric difference between the integration regions) and E_2 (accounting for the term in |⟨x, d⟩|^2). These satisfy

$0 \le E_1 \le r^2 |\sin(\alpha)|^2 \int \|x\|^2\, dP(x) \le 2 r^2 |\sin(\alpha)|^2 \|\Sigma_X\|, \quad (18)$

$0 \le E_2 \le \frac{1 - \cos(\alpha)}{2} \|d\|^2 \int_{A(w_1, w_2)} \|x\|^2\, dP(x) \le \frac{1 - \cos(\alpha)}{2} \|\Sigma_X\|, \quad (19)$

by direct application of Cauchy-Schwarz. The proof is completed by plugging the bounds from (18) and (19) into (17)."}, {"section_index": "15", "section_name": "B.4 PROOF OF THEOREM 2.4", "section_text": "Consider generic α and l ≤ m. A path from θ^A to θ^B will be constructed by concatenating the following paths:

1. from θ^A to θ_l^A, the best linear predictor using the same first layer as θ^A,
2. from θ_l^A to θ_s^A, the best (m - l)-term approximation using perturbed atoms from θ^A,
3. from θ_s^A to θ*, the oracle l-term approximation,
4. from θ* to θ_s^B, the best (m - l)-term approximation using perturbed atoms from θ^B,
5. from θ_s^B to θ_l^B, the best linear predictor using the same first layer as θ^B,
6. from θ_l^B to θ^B.

The proof will study the increase in the loss along each subpath and aggregate the resulting increases into a common bound.

Subpaths (1) and (6) only involve changing the parameters of the second layer while leaving the first-layer weights fixed, which defines a convex loss. Therefore a linear path is sufficient to guarantee that the loss along each of these subpaths is upper bounded by λ on one end and by δ_{W^A}(m, 0, m) (resp. δ_{W^B}(m, 0, m)) on the other.

Concerning subpaths (3) and (4), we notice that they can also be constructed using only parameters of the second layer, by observing that one can fit into a single n × m parameter matrix both the (m - l)-term approximation and the oracle l-term approximation. Indeed, let us describe subpath (3) in detail (subpath (4) is constructed analogously by replacing the role of θ_s^A with θ_s^B). Let W̃^A be the first-layer parameter matrix associated with the (m - l)-sparse solution θ_s^A, and let γ^A denote its second-layer coefficients, an m-dimensional vector with at most m - l non-zero coefficients. Let W* be the first-layer matrix of the l-term oracle approximation, and γ* the corresponding second-layer coefficients. Since only m - l columns of W̃^A are used, corresponding to the support of γ^A, we can consider a path that replaces the remaining l columns with those from W* while keeping the second-layer vector γ^A fixed. Since the modified columns correspond to zeros in γ^A, such paths have constant loss. Call W̄ the resulting first-layer matrix, containing both the m - l active columns of W̃^A and the l columns of W* in the positions determined by the zeros of γ^A. Now we can consider the linear subpath that interpolates between γ^A and γ* while keeping the first layer fixed at W̄. Since again this is a linear subpath that only moves second-layer coefficients, it is non-increasing thanks to the convexity of the loss with the first layer fixed. We easily verify that at the end of this linear subpath we are using the oracle l-term approximation, which has loss e(l); therefore subpath (3) incurs a loss that is bounded by its extremal values δ_{W^A}(m - l, α, m) and e(l).

Finally, we need to show how to construct the subpaths (2) and (5), which are the most delicate steps, since they cannot be bounded using convexity arguments as above. Let Ŵ^A be the perturbed first-layer parameter matrix with (m - l)-sparse second-layer coefficients γ^A, and let γ̃_1 be the second-layer coefficients of θ_l^A. Let us consider an auxiliary regression over the concatenated first layer

$\tilde W = [W^A;\ \hat W^A] \in \mathbb{R}^{n \times 2m}, \qquad \bar\beta_1 = [\tilde\gamma_1;\ 0], \quad \bar\beta_2 = [0;\ \gamma^A],$

so that the augmented problem reproduces, at β̄_1 and β̄_2 respectively, the losses at the two endpoints of the subpath. Along the linear path η̄(t) = (1 - t)β̄_1 + tβ̄_2 in the augmented second layer, convexity gives

$\forall t,\ L(t) = E\{|Y - \bar\eta(t)^{\top} Z(\tilde W)|^2\} + \kappa\|\bar\eta(t)\|_1 \le \max(L(0), L(1)).$

Let us now approximate this augmented linear path with a path in terms of first- and second-layer weights. We consider

$\eta_1(t) = (1 - t)W^A + t\,\hat W^A, \quad \text{and} \quad \eta_2(t) = (1 - t)\tilde\gamma_1 + t\,\gamma^A.$
The loss along this path satisfies

$F_o(\{\eta_1(t), \eta_2(t)\}) = E\{|Y - \eta_2(t)^{\top} Z(\eta_1(t))|^2\} + \kappa\|\eta_2(t)\|_1 \le E\{|Y - \eta_2(t)^{\top} Z(\eta_1(t))|^2\} + \kappa\big((1 - t)\|\tilde\gamma_1\|_1 + t\|\gamma^A\|_1\big) = L(t) + E\{|Y - \eta_2(t)^{\top} Z(\eta_1(t))|^2\} - E\{|Y - (1 - t)\tilde\gamma_1^{\top} Z(W^A) - t\,\gamma^{A\top} Z(\hat W^A)|^2\}.$

Finally, we verify that

$E\{|Y - \eta_2(t)^{\top} Z(\eta_1(t))|^2\} - E\{|Y - (1 - t)\tilde\gamma_1^{\top} Z(W^A) - t\,\gamma^{A\top} Z(\hat W^A)|^2\} \le 4\alpha \max(E|Y|^2, E|Y|^2\|\Sigma_X\|)\big(\kappa^{-1/2} + \alpha E|Y|^2 \kappa^{-1}\big) + o(\alpha^2). \quad (22)$

Indeed, since for all i and t ∈ [0, 1] the perturbed atoms satisfy ∠(η_1(t)_i, w_i^A) = O(α), each term can be written as (1 - t)γ̃_{1,i} z(w_i^A) + t γ_i^A z(ŵ_i^A) = η_2(t)_i z(η_1(t)_i) + n_i with a remainder n of size O(α), and the resulting cross terms are controlled by Cauchy-Schwarz.

We have just constructed a path from θ^A to θ^B, in which all subpaths except (2) and (5) have energy maximized at the extrema due to convexity, given respectively by λ, δ_{W^A}(m, 0, m), δ_{W^A}(m - l, α, m), e(l), δ_{W^B}(m - l, α, m), and δ_{W^B}(m, 0, m). For the two subpaths (2) and (5), (22) shows that it is sufficient to add the corresponding upper bound to the linear subpath, which is of the form Cα + o(α^2), where C is an explicit constant independent of θ. Since l and α are arbitrary, we are free to pick the infimum, which concludes the proof."}, {"section_index": "16", "section_name": "B.5 PROOF OF COROLLARY 2.5", "section_text": "Let us consider a generic first-layer weight matrix W ∈ R^{n×m}. Without loss of generality, we can assume that ‖w_k‖ = 1 for all k, since increasing the norm of w_k within the unit ball has no penalty in the loss, and we can compensate this scaling in the second layer thanks to the homogeneity of the half-rectification. Since this results in an attenuation of these second-layer weights, they too are guaranteed not to increase the loss.

From Vershynin (2010) [Lemma 5.2] we verify that the covering number N(S^{n-1}, ε) of the Euclidean unit sphere S^{n-1} satisfies

$N(S^{n-1}, \epsilon) \le \Big(1 + \frac{2}{\epsilon}\Big)^n,$

which we apply with ε = ε_m, yielding a net of size

$u_m = N(S^{n-1}, \epsilon_m) \le \Big(1 + \frac{2}{\epsilon_m}\Big)^n.$

Since we have m vectors in the unit sphere, it results from the pigeonhole principle that at least one element of the net will be associated with at least ν_m = m u_m^{-1} vectors; in other words, we are guaranteed to find amongst our weight vectors W a collection Q_m of ν_m vectors that are all at an angle at most 2ε_m apart. Let us now apply Theorem 2.4 by picking l = ν_m and α = ε_m. We need to see that the terms involved in the bound all converge to 0 as m → ∞.

The contribution of the oracle error, e(ν_m) - e(m), goes to zero as m → ∞ by the fact that lim_{m→∞} e(m) exists (it is a decreasing, positive sequence) and that ν_m → ∞.

Let us now verify that δ_W(m - ν_m, ε_m, m) also converges to zero. We are going to prune the first layer by removing, one by one, the vectors in Q_m. Removing one of these vectors at a time incurs an error of the order of ε_m. Indeed, let w_k be one of such vectors and let β' be the solution of

$\min_{\beta' = (\beta_f; \beta_k)} E(\beta') = \min_{\beta'} E\{|Y - \beta_f^{\top} Z(W_{-k}) - \beta_k z(w_k)|^2\} + \kappa\big(\|\beta_f\|_1 + |\beta_k|\big),$

where W_{-k} is a shorthand for the matrix containing the rest of the vectors that have not been discarded yet. Removing the vector w_k from the first layer increases the loss by a factor that is upper bounded by E(β̃) - E(β'), where β̃ transfers the coefficient of the removed atom onto a neighboring atom w_{k-1} in Q_m:

$\tilde\beta_j = \beta'_j \ \text{for } j < k - 1, \quad \tilde\beta_{k-1} = \beta'_{k-1} + \beta'_k \ \text{otherwise},$

since now β̃ is a feasible solution for the pruned first layer.

Since ∠(w_k, w_{k-1}) ≤ 2ε_m, it results from Proposition 2.3 that

$z(w_k) = z(w_{k-1}) + n,$

with E|n|^2 = O(ε_m), so that, bounding the cross terms via Cauchy-Schwarz,

$E(\tilde\beta) - E(\beta') \le (C + \alpha)\alpha \simeq \epsilon_m,$

where C only depends on E{|Y|^2} and E{‖X‖^2}. We also verify that ‖β̃‖_1 ≤ ‖β'‖_1.

Because the weight matrices are anywhere from high to extremely high dimensional, for the purposes of visualization we projected the models on the connecting path into a three-dimensional subspace. Snapshots of the algorithm in progress for the quadratic regression task are indicated in Fig. 3. This was done by vectorizing all of the weight matrices for all the beads for a given connecting path, and then performing principal component analysis to find the three highest-weight projections for the collection of models that define the endpoints of segments for a connecting path - i.e., the θ_i discussed in the algorithm. We then projected the connecting string of models onto these three directions.
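The projection just described can be reproduced with a short Python sketch (an illustration of the visualization step, not the authors' plotting code; `beads` is the string of models as flattened weight vectors):

import numpy as np

def project_beads_3d(beads):
    # Project a string of models onto the three leading principal components
    # of the collection. Returns an (n_beads, 3) array of coordinates.
    X = np.stack([np.ravel(b) for b in beads])
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt: principal axes
    return X @ Vt[:3].T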
Figure 2: A cartoon of the algorithm. a): The initial two models with approximately the same loss, L_0. b): The interpolated loss curve, in red, and its global maximum, occurring at t = t*. c): The interpolated model θ(θ_i, θ_j, t*) is added and labeled θ_{i,j}. d): Stochastic gradient descent is performed on the interpolated model until its loss is below αL_0. e): New interpolated loss curves are calculated between the models, pairwise on the chain. f): As in step c), a new model is inserted at the maximum of the interpolated loss curve between θ_i and θ_{i,j}. g): As in step d), gradient descent is performed until the model has low enough loss.

Figure 3: Snapshots of Dynamic String Sampling in action for the quadratic regression task. The string's coordinates are its projections onto the three most important principal axes of the fully converged string. (Top left) One step into the algorithm; note the high loss between all of the vertices of the path. (Top right) An intermediate step of the algorithm: portions of the string have converged, but there are still regions with high interpolated loss. (Bottom left) Near the end of the algorithm: almost the entire string has converged to low loss. (Bottom right) The algorithm has finished: a continuous path between the models has been found with low loss.

Finally, projections onto pairs of principal components are indicated by the black curves.

"}, {"section_index": "17", "section_name": "E.1 A DISCONNECTION", "section_text": "As a sanity check for the algorithm, we also applied it to a problem for which we know that it is not possible to connect models of equivalent power by the arguments of Section 2.3.1. The input data is 3 points in R^2, and the task is to permute the datapoints, i.e. map {x_1, x_2, x_3} → {x_2, x_3, x_1}. This map requires at least 12 parameters in general for the three linear maps which take x_i → x_j for (i, j) ∈ {(1, 2), (2, 3), (3, 1)}. Our architecture was a 2-3-2 fully connected neural network with a single ReLU nonlinearity after the hidden layer - a model which clearly has 12 free parameters by construction. The two models we tried to connect were a single model, θ, and a copy of θ with the first two neurons in the hidden layer permuted, θ_σ.
The algorithm fails to converge when initialized with these two models. We provide a visualization of the string of models produced by the algorithm in Fig. 4.

In general, a persistent high interpolated loss between two neighboring beads on the string of models could arise either from a slowly converging, connected pair of models or from a truly disconnected pair of models. "Proving" a disconnection at the level of numerical experiments is intractable in general, but a collection of negative results - i.e., failures to converge - is highly suggestive of a true disconnection.

The color of the strings was chosen to be representative of the test loss under a log mapping, so that extremely high test loss mapped to red, whereas test loss near the threshold mapped to blue. An animation of the connecting path can be seen on our GitHub page.

Figure 4: These three figures are projections of the components of the 12-dimensional weight matrices which comprise the models on the string produced by the DSS algorithm. The axes are the principal components of the weight matrices, and the colors indicate test error for the model. For more details on the figure generation, see Appendix D. (Left) The string of models after 1 step. Note the high error at all points except the middle and the endpoints. (Middle) An intermediate stage of the algorithm. Part of the string has converged, but a persistent high-error segment still exists. (Right) Even after running for many steps, the error persists, and the algorithm does not converge."}]
BJAFbaolg | [{"section_index": "0", "section_name": "INTRODUCTION AND MOTIVATION", "section_text": "Many approaches for learning to generate high-dimensional samples have been and are still actively being investigated. These approaches can be roughly classified under the following broad categories.

* Associate Fellow, Canadian Institute For Advanced Research (CIFAR)"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "In this work, we investigate a novel training procedure to learn a generative model as the transition operator of a Markov chain, such that, when applied repeatedly on an unstructured random noise sample, it will denoise it into a sample that matches the target distribution from the training set. The novel training procedure to learn this progressive denoising operation involves sampling from a slightly different chain than the model chain used for generation in the absence of a denoising target. In the training chain we infuse information from the training target example that we would like the chains to reach with a high probability. The thus learned transition operator is able to produce quality and varied samples in a small number of steps. Experiments show competitive results compared to the samples generated with a basic Generative Adversarial Net.

To go beyond the relatively simpler tasks of classification and regression, advancing our ability to learn good generative models of high-dimensional data appears essential. There are many scenarios where one needs to efficiently produce good high-dimensional outputs where output dimensions have unknown intricate statistical dependencies: from generating realistic images, segmentations, text, speech, keypoint or joint positions, etc., possibly as an answer to the same, other, or multiple input modalities. These are typically cases where there is not just one right answer but a variety of equally valid ones following a non-trivial and unknown distribution. A fundamental ingredient for such scenarios is thus the ability to learn a good generative model from data, one from which we can subsequently efficiently generate varied samples of high quality.

• Ordered visible dimension sampling (van den Oord et al., 2016; Larochelle & Murray, 2011). In this type of auto-regressive approach, output dimensions (or groups of conditionally independent dimensions) are given an arbitrary fixed ordering, and each is sampled conditionally on the previously sampled ones. This strategy is often implemented using a recurrent network (LSTM or GRU). Desirable properties of this type of strategy are that the exact log-likelihood can usually be computed tractably, and sampling is exact. Undesirable properties follow from the forced ordering, whose arbitrariness feels unsatisfactory, especially for domains that do not have a natural ordering (e.g. images), and which imposes, for high-dimensional outputs, a long sequential generation that can be slow.
• Undirected graphical models with multiple layers of latent variables. These make inference, and thus learning, particularly hard and tend to be costly to sample from (Salakhutdinov & Hinton, 2009).
• Directed graphical models trained as variational autoencoders (VAE) (Kingma & Welling, 2014; Rezende et al., 2014).
• Adversarially-trained generative networks (GAN) (Goodfellow et al., 2014).
• Stochastic neural networks, i.e. networks with stochastic neurons, trained by an adapted form of stochastic backpropagation.
• Generative uses of denoising autoencoders (Vincent et al., 2010) and their generalization as Generative Stochastic Networks (Alain et al., 2016).
• Inverting a non-equilibrium thermodynamic slow diffusion process (Sohl-Dickstein et al., 2015).
• Continuous transformation of a distribution by invertible functions (Dinh et al., 2014), also used for variational inference in Rezende & Mohamed (2015).

Several of these approaches are based on maximizing an explicit or implicit model log-likelihood, or a lower bound on it, but some successful ones are not, e.g. GANs.
The approach we propose here is based on the notion of "denoising", and thus takes its root in denoising autoencoders and the GSN type of approaches. It is also highly related to the non-equilibrium thermodynamics inverse diffusion approach of Sohl-Dickstein et al. (2015). One key aspect that distinguishes these types of methods from the others listed above is that sample generation is achieved thanks to a learned stochastic mapping from input space to input space, rather than from a latent space to input space.

Specifically, in the present work, we propose to learn to generate high-quality samples through a process of progressive, stochastic denoising, starting from a simple initial "noise" sample generated in input space from a simple factorial distribution, i.e. one that does not take into account any dependency or structure between dimensions. This, in effect, amounts to learning the transition operator of a Markov chain operating on input space. Starting from such an initial "noise" input, and repeatedly applying the operator for a small fixed number T of steps, we aim to obtain a high-quality resulting sample, effectively modeling the training data distribution. Our training procedure uses a novel "target-infusion" technique, designed to slightly bias model sampling to move towards a specific data point during training, and thus provide inputs to denoise which are likely under the model's sample generation paths. By contrast with Sohl-Dickstein et al. (2015), which consists in inverting a slow and fixed diffusion process, our infusion chains make a few large jumps and follow the model distribution as the learning progresses.

The rest of this paper is structured as follows: Section 2 formally defines the model and training procedure. Section 3 discusses and contrasts our approach with the most related methods from the literature. Section 4 presents experiments that validate the approach. Section 5 concludes and proposes future work directions.

We are given a finite data set D containing n points in R^d, supposed drawn i.i.d. from an unknown distribution q*. The data set D is supposed split into training, validation and test subsets D_train, D_valid, D_test. We will denote q_train the empirical distribution associated to the training set, and use x to denote observed samples from the data set. We are interested in learning the parameters of a generative model p conceived as a Markov chain from which we can efficiently sample. Note that we are interested in learning an operator that will display fast "burn-in" from the initial factorial "noise" distribution, but beyond the initial T steps we are not concerned about potential slow mixing or being stuck. We will first describe the sampling procedure used to sample from a trained model, before explaining our training procedure.

The generative model p is defined as the following sampling procedure:
• Using a simple factorial distribution p(0)(z(0)), draw an initial sample z(0) ~ p(0), where z(0) ∈ R^d. Since p(0) is factorial, the d components of z(0) are independent: p(0) cannot model any dependency structure. z(0) can be pictured as essentially unstructured random noise.
• Repeatedly apply, T times, a stochastic transition operator p(t)(z(t)|z(t-1)), yielding a more "denoised" sample z(t) ~ p(t)(z(t)|z(t-1)), where all z(t) ∈ R^d.
• Output z(T) as the final generated sample. Our generative model distribution is thus p(z(T)), the marginal associated to the joint $p(z^{(0)}, \ldots, z^{(T)}) = p^{(0)}(z^{(0)}) \prod_{t=1}^{T} p^{(t)}(z^{(t)}|z^{(t-1)})$.

In summary, samples from model p are generated, starting with an initial sample from a simple distribution p(0), by taking the T-th sample along the Markov chain z(0) → z(1) → z(2) → ... → z(T), whose transition operator is p(t)(z(t)|z(t-1)). We will call this chain the model sampling chain. Figure 1 illustrates this sampling procedure using a model (i.e. transition operator) that was trained on MNIST. Note that we impose no formal requirement that the chain converges to a stationary distribution, as we simply read out z(T) as the sample from our model p. The chain also need not be time-homogeneous, as highlighted by the notation p(t) for the transitions.

Figure 1: The model sampling chain. Each row shows a sample from p(z(0), ..., z(T)) for a model that has been trained on MNIST digits. We see how the learned Markov transition operator progressively denoises an initial unstructured noise sample. We can also see that there remains ambiguity in the early steps as to what digit this could become. This ambiguity gets resolved only in later steps. Even after a few initial steps, stochasticity could have made a chain move to a different final digit shape.

The set of parameters θ of model p comprises the parameters of p(0) and the parameters of the transition operator p(t)(z(t)|z(t-1)). For tractability, learnability, and efficient sampling, these distributions will be chosen factorial, i.e. $p^{(0)}(z^{(0)}) = \prod_{i=1}^{d} p_i^{(0)}(z_i^{(0)})$ and $p^{(t)}(z^{(t)}|z^{(t-1)}) = \prod_{i=1}^{d} p_i^{(t)}(z_i^{(t)}|z^{(t-1)})$. Note that the conditional distribution of an individual component i, p_i(t)(z_i(t)|z(t-1)), may however be multimodal, e.g. a mixture, in which case p(t)(z(t)|z(t-1)) would be a product of independent mixtures (conditioned on z(t-1)), one per dimension. In our experiments, we will take the p(t)(z(t)|z(t-1)) to be simple diagonal Gaussians, yielding a Deep Latent Gaussian Model (DLGM) as in Rezende et al. (2014).
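A minimal Python sketch of this sampling chain, with a hypothetical `transition(z)` standing in for the trained network that outputs the per-dimension mean and variance of the diagonal-Gaussian operator (an illustration, not the authors' implementation):

import numpy as np

def sample_chain(mu0, var0, transition, T, rng=np.random.default_rng(0)):
    # z(0) ~ p(0): factorial Gaussian prior over the d input dimensions.
    z = mu0 + np.sqrt(var0) * rng.standard_normal(mu0.shape)
    chain = [z]
    for _ in range(T):
        mu, var = transition(z)  # factorial conditional p(t)(z(t) | z(t-1))
        z = mu + np.sqrt(var) * rng.standard_normal(mu.shape)
        chain.append(z)
    return chain  # chain[-1] is the generated sample z(T)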
"}, {"section_index": "2", "section_name": "2.3 INFUSION TRAINING PROCEDURE", "section_text": "We want to train the parameters of model p such that samples from D_train are likely to be generated under the model sampling chain. Let θ(0) be the parameters of p(0) and let θ(t) be the parameters of p(t)(z(t)|z(t-1)). Note that parameters θ(t) for t > 0 can straightforwardly be shared across time steps, which we will be doing in practice. Having committed to using (conditionally) factorial distributions for our p(0)(z(0)) and p(t)(z(t)|z(t-1)), which are both easy to learn and cheap to sample from, let us first consider the following greedy stagewise procedure. We can easily learn p(0)(z(0)) to model the marginal distribution of each component x_i of the input, by training it by gradient descent on a maximum likelihood objective, i.e.

$\theta^{(0)*} = \arg\max_{\theta^{(0)}} \mathbb{E}_{x \sim q_{train}}\big[\log p^{(0)}(x; \theta^{(0)})\big].$

This gives us a first, very crude, unstructured (factorial) model of q*.

Having learned this p(0), we might be tempted to then greedily learn the next stage p(1) of the chain in a similar fashion, after drawing samples z(0) ~ p(0), in an attempt to learn to "denoise" the sampled z(0) into x. Yet the corresponding training objective is deficient: x and z(0) are sampled independently of each other, so z(0) contains no information about x, hence p(1)(x|z(0)) = p(1)(x). Maximizing this second objective thus becomes essentially the same as what we did when learning p(0); we would learn nothing more. It is essential, if we hope to learn a useful conditional distribution p(1)(x|z(0)), that it be trained on particular z(0) containing some information about x. In other words, we should not take our training inputs to be samples from p(0) but from a slightly different distribution, biased towards containing some information about x. Let us call it q(0)(z(0)|x). A natural choice for it, if it were possible, would be to take q(0)(z(0)|x) = p(z(0)|z(T) = x), but this is an intractable inference, as all intermediate z(t) between z(0) and z(T) are effectively latent states that we would need to marginalize over. Using a workaround such as a variational or MCMC approach would be the usual fallback. Instead, let us focus on our initial intent of guiding a progressive stochastic denoising, and think whether we can come up with a different way to construct q(0)(z(0)|x), and q(t)(z(t)|z(t-1), x) similarly for the next steps.

Eventually, we expect a sequence of samples from Markov chain p to move from initial "noise" towards a specific example x from the training set rather than another one, primarily if a sample along the chain "resembles" x to some degree. This means that the transition operator should learn to pick up a minor resemblance with an x in order to transition to something likely to be even more similar to x. In other words, we expect samples along a chain leading to x to both have high probability under the transition operator of the chain p(t)(z(t)|z(t-1)), and to have some form of at least partial "resemblance" with x, likely to increase as we progress along the chain. One highly inefficient way to emulate such a chain of samples would be, for each step t, to sample many candidate samples from the transition operator (a conditionally factorial distribution) until we generate one that has some minimal "resemblance" to x (e.g., for a discrete space, this resemblance measure could be based on their Hamming distance). A qualitatively similar result can be obtained at a negligible cost by sampling from a factorial distribution that is very close to the one given by the transition operator, but very slightly biased towards producing something closer to x. Specifically, we can "infuse" a little of x into our sample by choosing, for each input dimension, whether we sample it from the distribution given for that dimension by the transition operator, or whether, with a small probability α(t) (the infusion rate), we take the value of that dimension from x. Samples from this biased chain, in which we slightly "infuse" x, will provide us with the inputs of our input-target training pairs for the transition operator. The target part of the training pairs is simply x.

1 Note that δ_{x_i} does not denote a Dirac delta but a Gaussian with small σ.

2 In all experiments, we use an increasing schedule α(t) = α(t-1) + ω with α(0) and ω constant. This allows us to build our chain such that in the first steps we give little information about the target and in the last steps we give more information about the target. This forces the network to have less confidence (greater uncertainty) at the beginning of the chain and more confidence on the convergence point at the end of the chain.
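One step of the biased (infusion) chain just described can be sketched as follows; `transition` is the same hypothetical stand-in for the learned factorial operator used earlier, and the per-dimension Bernoulli mask implements the "take the value of that dimension from x with small probability alpha" rule (an illustration under these assumptions):

import numpy as np

def infusion_step(z_prev, x, alpha, transition, rng=np.random.default_rng(0)):
    # Sample each dimension from the transition operator, then infuse the
    # target: with probability alpha, a dimension is copied from x instead.
    mu, var = transition(z_prev)
    z = mu + np.sqrt(var) * rng.standard_normal(mu.shape)
    infuse = rng.random(x.shape) < alpha  # Bernoulli infusion mask
    return np.where(infuse, x, z)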
More precisely, the infusion chain z̃(0) → z̃(1) → ... → z̃(T-1) starts from z̃(0) ~ q(0)(z̃(0)|x) and samples each dimension i from the transition operator with probability 1 - α(t), taking the target value x_i (through δ_{x_i}) with probability α(t), i.e. $q_i^{(t)}(\tilde z_i^{(t)}|\tilde z^{(t-1)}, x) = (1 - \alpha^{(t)})\, p_i^{(t)}(\tilde z_i^{(t)}|\tilde z^{(t-1)}) + \alpha^{(t)} \delta_{x_i}(\tilde z_i^{(t)})$. The denoising-based infusion training procedure is then:

For all x ∈ D_train:
• sample an infusion chain z̃ = (z̃(0), ..., z̃(T-1));
• perform a stochastic gradient update of each θ(t), so that p(t) learns to "denoise" its input z̃(t-1) into target x, i.e. $\theta^{(t)} \leftarrow \theta^{(t)} + \eta^{(t)} \frac{\partial \log p^{(t)}(x|\tilde z^{(t-1)}; \theta^{(t)})}{\partial \theta^{(t)}}.$

3 Since we will be sharing parameters between the p(t), in order for the expected larger error gradients on the earlier transitions not to dominate the parameter updates over the later transitions, we used an increasing learning-rate schedule η(t) for t ∈ {1, ..., T}.

Figure 2: Training infusion chains, infused with target x = a "3" digit. This figure shows the evolution of the chain q(z(0), ..., z(30)|x) as training on MNIST progresses. The top row is after network random weight initialization; the second row is after 1 training epoch, the third after 2 training epochs, and so on. Each of these images was at some point provided as the input part of an (input, target) training pair for the network. The network was trained to denoise all of them into target 3. We see that as training progresses, the model has learned to pick up the cues provided by target infusion, to move towards that target. Note also that a single denoising step, even with target infusion, is not sufficient for the network to produce a sharp, well-identified digit.

As illustrated in Figure 2, the distribution of samples from the infusion chain evolves as training progresses, since this chain remains close to the model sampling chain.

The exact log-likelihood of the generative model implied by our model p is intractable. The log-probability of an example x can however be expressed using a proposal distribution q as:

$\log p(x) = \log \mathbb{E}_{q(\tilde z|x)}\left[\frac{p(\tilde z, x)}{q(\tilde z|x)}\right]. \quad (2)$

Using Jensen's inequality we can thus derive the following lower bound:

$\log p(x) \ge \mathbb{E}_{q(\tilde z|x)}\big[\log p(\tilde z, x) - \log q(\tilde z|x)\big], \quad (3)$

where $\log p(\tilde z, x) = \log p^{(0)}(\tilde z^{(0)}) + \sum_{t=1}^{T-1} \log p^{(t)}(\tilde z^{(t)}|\tilde z^{(t-1)}) + \log p^{(T)}(x|\tilde z^{(T-1)})$ and $\log q(\tilde z|x) = \log q^{(0)}(\tilde z^{(0)}|x) + \sum_{t=1}^{T-1} \log q^{(t)}(\tilde z^{(t)}|\tilde z^{(t-1)}, x)$.

A stochastic estimate can easily be obtained by replacing the expectation by an average using a few samples from q(z̃|x). We can thus compute a lower-bound estimate of the average log-likelihood over training, validation and test data. Similarly, in addition to the lower bound based on Eq. 3, we can use the same few samples from q(z̃|x) to get an importance-sampling estimate of the likelihood based on Eq. 2.

4 Specifically, the two estimates (lower bound and IS) start by collecting k samples from q(z̃|x) and computing for each the corresponding l_j = log p(z̃, x) - log q(z̃|x). The lower-bound estimate is then obtained by averaging the resulting l_1, ..., l_k, whereas the IS estimate is obtained by taking the log of the averaged importance weights (in a numerically stable manner, as logsumexp(l_1, ..., l_k) - log k).

"}, {"section_index": "3", "section_name": "2.4.1 LOWER-BOUND-BASED INFUSION TRAINING PROCEDURE", "section_text": "Since we have derived a lower bound on the likelihood, we can alternatively choose to optimize this stochastic lower bound directly during training. This alternative lower-bound-based infusion training procedure differs only slightly from the denoising-based infusion training procedure by using z̃(t) as the training target at step t (performing a gradient step to increase log p(t)(z̃(t)|z̃(t-1); θ(t))), whereas denoising training always uses x as its target (performing a gradient step to increase log p(t)(x|z̃(t-1); θ(t))). Note that the same reparametrization trick as used in variational autoencoders (Kingma & Welling, 2014) can be used here to backpropagate through the chain's Gaussian sampling.
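The two estimates of footnote 4 reduce to a few lines; given the k log-ratios l_j = log p(z̃_j, x) - log q(z̃_j|x) computed from sampled infusion chains, a minimal sketch:

import numpy as np
from scipy.special import logsumexp

def log_likelihood_estimates(log_ratios):
    # Lower-bound estimate (Eq. 3): the average of the l_j.
    # Importance-sampling estimate (Eq. 2): logsumexp(l) - log k,
    # the numerically stable log of the averaged importance weights.
    l = np.asarray(log_ratios)
    return l.mean(), logsumexp(l) - np.log(len(l))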
Generating samples as a repeated application of a Markov transition operator that operates on input space is at the heart of Markov chain Monte Carlo (MCMC) methods. They allow sampling from an energy model, where one can efficiently compute the energy, or unnormalized negated log-probability (or density), at any point. The transition operator is then derived from an explicit energy function such that the Markov chain prescribed by a specific MCMC method is guaranteed to converge to the distribution defined by that energy function, as the equilibrium distribution of the chain. MCMC techniques have thus been used to obtain samples from an energy model in the process of learning to adjust its parameters.

By contrast, here we do not learn an explicit energy function, but rather learn directly a parameterized transition operator, and define an implicit model distribution based on the result of running the Markov chain.

Variational autoencoders (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) also start from an unstructured (independent) noise sample and non-linearly transform this into a distribution that matches the training data. One difference with our approach is that the VAE typically maps from a lower-dimensional space to the observation space. By contrast, we learn a stochastic transition operator from input space to input space that we repeat for T steps. Another key difference is that the VAE learns a complex, heavily parameterized approximate posterior proposal q, whereas our infusion-based q can be understood as a simple heuristic proposal distribution based on p. Importantly, the specific heuristic we use to infuse x into q makes sense precisely because our operator is a map from input space to input space, and couldn't be readily applied otherwise. The generative network in Rezende et al. (2014) is a Deep Latent Gaussian Model (DLGM), just as ours. But their approximate posterior q is taken to be factorial, including across all layers of the DLGM, whereas our infusion-based q involves an ordered sampling of the layers, as we sample from q(t)(z̃(t)|z̃(t-1), x).

More recent proposals involve sophisticated approaches to sample from better approximate posteriors, such as the work of Salimans et al. (2015), in which Hamiltonian Monte Carlo is combined with variational inference - which looks very promising, though computationally expensive - and Rezende & Mohamed (2015), which generalizes the use of normalizing flows to obtain a better approximate posterior.

Earlier works that propose to directly learn a transition operator resulted from research to turn autoencoder variants that have a stochastic component, in particular denoising autoencoders (Vincent et al., 2010), into generative models that one can sample from. This development is natural, since a stochastic autoencoder is a stochastic transition operator from input space to input space. Generative Stochastic Networks (GSN) (Alain et al., 2016) generalized insights from earlier stochastic autoencoder sampling heuristics (Rifai et al., 2012) into a more formal and general framework. These previous works on generative uses of autoencoders and GSNs attempt to learn a chain whose equilibrium distribution will fit the training data. Because autoencoders and the chain are typically started from or very close to training data points, they are concerned with the chain mixing quickly between modes.
By contrast, our model chain is always restarted from unstructured noise, and is not required to reach or even have an equilibrium distribution. Our concern is only what happens during the T "burn-in" initial steps, and to make sure that the chain transforms the initial factorial noise distribution into something that best fits the training data distribution. There are no mixing concerns beyond those T initial steps.

A related aspect and limitation of previous denoising autoencoder and GSN approaches is that these were mainly "local" around training samples: the stochastic operator explored space starting from and primarily centered around training examples, and learned based on inputs in these parts of space only. Spurious modes in the generated samples might result from large unexplored parts of space that one might encounter while running a long chain.

"}, {"section_index": "4", "section_name": "3.4 REVERSING A DIFFUSION PROCESS IN NON-EQUILIBRIUM THERMODYNAMICS", "section_text": "The approach of Sohl-Dickstein et al. (2015) is probably the closest to the approach we develop here. Both share a similar model sampling chain that starts from unstructured factorial noise, and neither is concerned about an equilibrium distribution. They are however quite different in several key aspects: Sohl-Dickstein et al. (2015) proceed to invert an explicit diffusion process that starts from a training set example and very slowly destroys its structure to become this random noise; they then learn to reverse this process, i.e. an inverse diffusion. To maintain the theoretical argument that the exact reverse process has the same distributional form (e.g. p(x(t-1)|x(t)) and p(x(t)|x(t-1)) both factorial Gaussians), the diffusion has to be infinitesimal by construction, hence the proposed approach uses chains with thousands of tiny steps. Instead, our aim is to learn an operator that can yield a high-quality sample efficiently using only a small number T of larger steps. Also, our infusion training does not posit a fixed a priori diffusion process that we would learn to reverse.
And while the distribution of diffusion chain samples of Sohl-Dickstein et al. (2015) is fixed and remains the same all along the training, the distribution of our infusion chain samples closely follows the model chain as our model learns. Our proposed infusion sampling technique thus adapts to the changing generative model distribution as the learning progresses.

Drawing on both Sohl-Dickstein et al. (2015) and the walkback procedure introduced for GSNs in Alain et al. (2016), a variational variant of the walkback algorithm was investigated by Goyal et al. (2017) at the same time as our work. It can be understood as a different approach to learning a Markov transition operator, in which a "heating" diffusion operator is seen as a variational approximate posterior to the forward "cooling" sampling operator, with the exact same form and parameters except for a different temperature.

We trained models on several datasets with real-valued examples. We used as prior distribution p(0) a factorial Gaussian whose parameters were set to the mean and variance of each pixel over the training set. Similarly, our models for the transition operators are factorial Gaussians. Their mean and elementwise variance are produced as the output of a neural network that receives the previous z(t-1) as its input, i.e. $p^{(t)}(z^{(t)}|z^{(t-1)}) = \mathcal{N}\big(\mu(z^{(t-1)}),\ \sigma^2(z^{(t-1)})\big)$, where μ and σ^2 are computed as output vectors of a neural network. We trained such a model using our infusion training procedure on MNIST (LeCun & Cortes, 1998), the Toronto Face Database (Susskind et al., 2010), CIFAR-10 (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015). For all datasets, the only preprocessing we did was to scale the integer pixel values down to the range [0, 1]. The network trained on MNIST and TFD is an MLP composed of two fully connected layers with 1200 units, using batch normalization (Ioffe & Szegedy, 2015).5 The network trained on CIFAR-10 is based on the same generator as the GANs of Salimans et al. (2016), i.e. one fully connected layer followed by three transposed convolutions. CelebA was trained with the previous network where we added another transposed convolution. We use rectified linear units (Glorot et al., 2011) on each layer inside the networks. Each of those networks has two distinct final layers with a number of units corresponding to the image size. They use sigmoid outputs: one predicts the mean, and the second predicts a variance scaled by a scalar β (in our case we chose β = 0.1), to which we add an epsilon ε = 1e-4 to avoid an excessively small variance. For each experiment, we trained the network on 15 steps of denoising with an increasing infusion rate of 1% (ω = 0.01, α(0) = 0), except on CIFAR-10, where we use an increasing infusion rate of 2% (ω = 0.02, α(0) = 0) on 20 steps.

5 We don't share batch-norm parameters across the network, i.e. for each time step we have different parameters and independent batch statistics.
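The two output heads described above can be sketched as follows; the weight matrices are hypothetical placeholders for trained parameters, and only the output parameterization (sigmoid mean, β-scaled sigmoid variance plus ε) follows the description in the text:

import numpy as np

def gaussian_output(h, W_mu, b_mu, W_var, b_var, beta=0.1, eps=1e-4):
    # h: last hidden-layer activations. The mean head keeps pixels in [0, 1];
    # the variance head is scaled by beta = 0.1, with eps = 1e-4 added to
    # avoid excessively small variances.
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    mu = sigmoid(h @ W_mu + b_mu)
    var = beta * sigmoid(h @ W_var + b_var) + eps
    return mu, var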
"}, {"section_index": "5", "section_name": "4.1 NUMERICAL RESULTS", "section_text": "Since we can't compute the exact log-likelihood, the evaluation of our model is not straightforward. However, we use the lower-bound estimator derived in Section 2.4 to evaluate our model during training and prevent overfitting (see Figure 3). Since most previous published results on non-likelihood-based models (such as GANs) used a Parzen-window-based estimator (Breuleux et al., 2011), we use it as our first comparison tool, even if it can be misleading (Theis & Bethge, 2016). Results are shown in Table 1; we use 10 000 generated samples and σ = 0.17. To get a better estimate of the log-likelihood, we then computed both the stochastic lower bound and the importance-sampling estimate (IS) given in Section 2.4. For the IS estimate in our MNIST-trained model, we used 20 000 intermediate samples. In Table 2 we compare our model with the recent Annealed Importance Sampling results (Wu et al., 2016). Note that, following their procedure, we add uniform noise of 1/256 to the (scaled) test points before evaluation, to avoid overevaluating models that might have overfitted on the 8-bit quantization of pixel values. Another comparison tool that we used is the Inception score, as in Salimans et al. (2016), which was developed for natural images and is thus most relevant for CIFAR-10. Since Salimans et al. (2016) used a GAN trained in a semi-supervised way with some tricks, the comparison with our unsupervised trained model isn't straightforward. However, we can see in Table 3 that our model outperforms the traditional GAN trained without labeled data.

Figure 3: Training curves: lower bounds on the average log-likelihood on MNIST as infusion training progresses. We also show the lower bounds estimated with the Parzen estimation method. [Plot: lower bound (nats) versus number of epochs, for Train Lower Bound, Valid Lower Bound, Train Parzen, and Valid Parzen.]

Table 1: Parzen-window-based estimator of the lower bound on the average test log-likelihood on MNIST (in nats).

Table 2: Log-likelihood (in nats) estimated by AIS on MNIST test and training sets, as reported in Wu et al. (2016), and the log-likelihood estimates of our model obtained by infusion training (last three lines). Our initial model uses a Gaussian output with diagonal covariance, and we applied both our lower-bound and importance-sampling (IS) log-likelihood estimates to it. Since Wu et al. (2016) used only an isotropic output observation model, in order to be comparable to them we also evaluated our model after replacing the output by an isotropic Gaussian output (same fixed variance for all pixels). Averages and standard deviations over 10 repetitions of the evaluation are provided. Note that AIS might provide a higher evaluation of likelihood than our current IS estimate, but this is left for future work.

Model | Test log-likelihood (1000 ex.) | Train log-likelihood (100 ex.)
VAE-50 (AIS) | 991.435 ± 6.477 | 1272.586 ± 6.759
GAN-50 (AIS) | 627.297 ± 8.813 | 620.498 ± 31.012
GMMN-50 (AIS) | 593.472 ± 8.591 | 571.803 ± 30.864
VAE-10 (AIS) | 705.375 ± 7.411 | 780.196 ± 19.147
GAN-10 (AIS) | 328.772 ± 5.538 | 318.948 ± 22.544
GMMN-10 (AIS) | 346.679 ± 5.860 | 345.176 ± 19.893
Infusion training + isotropic (IS estimate) | 413.297 ± 0.460 | 450.695 ± 1.617
Infusion training (IS estimate) | 1836.27 ± 0.551 | 1837.560 ± 1.074
Infusion training (lower bound) | 1350.598 ± 0.079 | 1230.305 ± 0.532

Table 3: Inception score (with standard error) of 50 000 samples generated by models trained on CIFAR-10. We use the models in Salimans et al. (2016) as baselines. 'SP' corresponds to the best model described by Salimans et al. (2016), trained in a semi-supervised fashion; '-L' corresponds to the same model after removing the labels in the training process (unsupervised); '-MBF' corresponds to supervised training without minibatch features.

Model | Real data | SP | -L | -MBF | Infusion training
Inception score | 11.24 ± .12 | 8.09 ± .07 | 4.36 ± .06 | 3.87 ± .03 | 4.62 ± .06

"}, {"section_index": "6", "section_name": "4.2 SAMPLE GENERATION", "section_text": "Another common qualitative way to evaluate generative models is to look at the quality of the samples generated by the model. In Figure 4 we show various samples on each of the datasets we used. In order to get sharper images, we use more denoising steps at sampling time than at training time (in the MNIST case we use 30 denoising steps for sampling with a model trained on 15 denoising steps). To make sure that our network didn't learn to copy the training set, we show in the last column the nearest training-set neighbor to the samples in the next-to-last column.

Figure 4: Mean predictions by our models on 4 different datasets: (a) MNIST, (b) Toronto Face Dataset, (c) CIFAR-10, (d) CelebA. The rightmost column shows the nearest training example to the samples in the next-to-last column.

"}, {"section_index": "7", "section_name": "4.3 INPAINTING", "section_text": "Another method to evaluate a generative model is inpainting. It consists of providing only a partial image from the test set and letting the model generate the missing part. In one experiment, we provide only the top half of CelebA test-set images and clamp that top half throughout the sampling chain. We restart sampling from our model several times, to see the variety in the distribution of the bottom parts it generates.
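The clamped sampling used for inpainting is a small modification of the sampling chain sketched earlier; here `mask` marks the known top-half pixels, and `transition`, `mu0`, `var0` are the same hypothetical stand-ins as before (an illustration of the procedure, not the authors' code):

import numpy as np

def inpaint(x_known, mask, mu0, var0, transition, T, rng=np.random.default_rng(0)):
    # Run the model chain, but overwrite the known dimensions with the
    # provided test-image values after every step, so that only the unknown
    # bottom half evolves.
    z = mu0 + np.sqrt(var0) * rng.standard_normal(mu0.shape)
    z = np.where(mask, x_known, z)  # clamp the given top half
    for _ in range(T):
        mu, var = transition(z)
        z = mu + np.sqrt(var) * rng.standard_normal(mu.shape)
        z = np.where(mask, x_known, z)  # re-clamp after each step
    return z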
Figure |5|shows that the model is able to generate a varied set of bottom halves, all consistent with the same top half, displaying different type of smiles and expression. We also see that the generated bottom halves transfer some information about the provided top half o the images (such as pose and more or less coherent hair cut).\nWe presented a new training procedure that allows a neural network to learn a transition operato f a Markov chain. Compared to the previously proposed method of Sohl-Dickstein et al.(2015 oased on inverting a slow diffusion process, we showed empirically that infusion training require ar fewer denoising steps, and appears to provide more accurate models. Currently, many success ul generative models, judged on sample quality, are based on GAN architectures. However thes equire to use two different networks, a generator and a discriminator, whose balance is reputed del cate to adjust, which can be source of instability during training. Our method avoids this probler y using only a single network and a simpler training objective.\nDenoising-based infusion training optimizes a heuristic surrogate loss for which we cannot (yet) provide theoretical guarantees, but we empirically verified that it results in increasing log-likelihood estimates. On the other hand the lower-bound-based infusion training procedure does maximize an explicit variational lower-bound on the log-likelihood. While we have run most of our experiments with the former, we obtained similar results on the few problems we tried with lower-bound-based infusion training.\nFigure 5: Inpainting on CelebA dataset. In each row, from left to right: an image form the test set; the same image with bottom half randomly sampled from our factorial prior. Then several end samples from our sampling chain in which the top part is clamped. The generated samples show that our model is able to generate a varied distribution of coherent face completions.\nand also to powerful inference methods such as|Rezende & Mohamed (2015). As future work, we also plan to investigate the use of more sophisticated neural net generators, similar to DCGAN's (Radford et al.l 2016) and to extend the approach to a conditional generator applicable to structured output problems."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank the developers of Theano (Theano Development Team[2016) for making this. library available to build on, Compute Canada and Nvidia for their computation resources, NSER( and Ubisoft for their financial support, and three ICLR anonymous reviewers for helping us improve. our paper."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components esti mation. arXiv preprint arXiv:1410.8516, 2014.\nXavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. Ir Aistats, volume 15, pp. 275, 2011.\nAnirudh Goyal, Nan Rosemary Ke, Alex Lamb, and Yoshua Bengio. The variational walkbacl algorithm. Technical report, Universite de Montreal, 2017. URL https: //openreview.. net/forum?id=rkpdnIqlx On openreview.net..\nDiederik P Kingma and Max Welling. Auto-encoding variational bayes. In Proceedings of the 2n International Conference on Learning Representations (ICLR 2014), 2014\nAlex. Krizhevsky and Geoffrey E Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto. 
2009\nYann LeCun and Corinna Cortes. The mnist database of handwritten digits, 1998\nYujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In Interna tional Conference on Machine Learning (1CML 2015), pp. 1718-1727, 2015.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair. Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Pro cessing Systems 27, pp. 2672-2680. Curran Associates, Inc., 2014.\nHugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS volume 1, pp. 2, 2011.\nZiwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild In Proceedings of International Conference on Computer Vision (ICCV 2015), December 2015.\nDanilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation an approximate inference in deep generative models. In Proceedings of the 31th International Con ference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 1278-1286 2014. URLhttp://jmlr.org/proceedings/papers/v32/rezende14.htm1\nSalah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sam- pling contractive auto-encoders. In Proceedings of the 29th International Conference on Machine Learning (ICML 2012), 2012\nTim Salimans, Diederik Kingma, and Max Welling. Markov chain monte carlo and variational inference: Bridging the gap. In Proceedings of The 32nd International Conference on Machine Learning, pp. 1218-1226, 2015.\nTim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen Improved techniques for training gans. CoRR, abs/1606.03498, 2016\nJosh M Susskind. Adam K Anderson. and Geoffrey E Hinton. The toronto face database. Depart ment of Computer Science, University of Toronto, Toronto, ON, Canada, Tech. Rep, 3, 2010.\nPascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research. 11(Dec):3371-3408. 2010.\nYuhuai Wu. Yuri Burda, Ruslan Salakhutdinov, and Roger B. Grosse. On the quantitative analysis of decoder-based generative models. CoRR, abs/1611.04273, 2016\nRuslan Salakhutdinov and Geoffrey E Hinton. Deep boltzmann machines. In AISTATS, volume 1. pp. 3, 2009."}, {"section_index": "10", "section_name": "A.1 MNIST EXPERIMENTS", "section_text": "t -1 We show the impact of the infusion rate a(t) + w for different numbers of training steps on the lower bound estimate of log-likelihood on the Validation set of MNIST in Figure |6 We alsc. show the quality of generated samples and the lower bound evaluated on the test set in Table|4] Eacl. experiment in Table|4|uses the corresponding models of Figure 6|that obtained the best lower bounc. value on the validation set. We use the same network architecture as described in Section|4] i.e twc. fully connected layers with Relu activations composed of 1200 units followed by two distinct fully. connected layers composed of 784 units, one that predicts the means, the other one that predicts. the variances. Each mean and variance is associated with one pixel. All of the the parameters oi. the model are shared across different steps except for the batch norm parameters. During training. 
we use the batch statistics of the current mini-batch in order to evaluate our model on the train anc validation sets. At test time (Table4), we first compute the batch statistics over the entire train se. for each step and then use the computed statistics to evaluate our model on the test test..\nWe did some experiments to evaluate the impact of a or w in a(t) = t- 1 + w.Figure6 shows that as the number of steps increases, the optimal value for infusion rate decreases. Therefore, if we want to use many steps, we should have a small infusion rate. These conclusions are valid for both increasing and constant infusion rate. For example, the optimal a for a constant infusion rate, in Figure |6e|with 10 steps is 0.08 and in Figure 6f|with 15 steps is 0.06. If the number of steps is not enough or the infusion rate is too small, the network will not be able to learn the target distribution as shown in the first rows of all subsection in Table.\nIn order to show the impact of having a constant versus an increasing infusion rate, we show in Fig ure7|the samples created by infused and sampling chains. We observe that having a small infusion rate over many steps ensures a slow blending of the model distribution into the target distribution\nIn Table 4, we can see high lower bound values on the test set with few steps even if the mode can't generate samples that are qualitatively satisfying. These results indicate that we can't rely or the lower bound as the only evaluation metric and this metric alone does not necessarily indicate the suitability of our model to generated good samples. However, it is still a useful tool to preven overfitting (the networks in Figure 6e and 6f overfit when the infusion rate becomes too high Concerning the samples quality, we observe that having a small infusion rate over an adequate number of steps leads to better samples"}, {"section_index": "11", "section_name": "A.2 INFUSION AND MODEL SAMPLING CHAINS ON NATURAL IMAGES DATASETS", "section_text": "In order to show the behavior of our model trained by Infusion on more complex datasets, we show in Figure 8|chains on CIFAR-10 dataset and in Figure 9|chains on CelebA dataset. In each Figure, the first sub-figure shows the chains infused by some test examples and the second sub- figure shows the model sampling chains. In the experiment on CIFAR-10, we use an increasing schedule a(t) = Q (t-1) + 0.02 with (0) = 0 and 20 infusion steps (this corresponds to the training (t-1) parameters). In the experiment on CelebA, we use an increasing schedule (t) = a + 0.01 with q(0) = 0 and 15 infusion steps.\nEffects of infuse rate on the lower bound with 1 step 1500 1000 500 C 500 Infuse rate [] 0.0 0.15 0.3 0.6 0.05 0.2 0.4 0.7 1000 0.1 0.25 0.5 0.9 1500 200 400 600 800 1000 Epochs\n(a) Networks trained with 1 infusion step. Each in fusion rate in the figure corresponds to a(o). Since we have only one step, we have w = 0.\nEffects of infuse rate on the lower bound trained with 10 steps 1500 1000 500 500 Infuse rate [] - 0.0 0.02 0.04 0.06 1000 0.01 0.03 0.05 0.07 1500 200 400 600 800 1000 Epochs c) Networks trained with 10 infusion steps. Ea nfusion rate corresponds to w. We set a(o) = 0 Effects of infuse rate on the lower bound trained with 10 steps 1500 1000 500\n(c) Networks trained with 10 infusion steps. Each infusion rate corresponds to w. 
We set a(o) = 0.\nEffects of infuse rate on the lower bound trained with 10 steps 1500 1000 (coe pne) punnq undr 500 500 Infuse rate [a] 0.0 0.05 0.1 0.2 0.02 0.08 -0.15 -0.3 1000 0.03 1500 0 200 400 600 800 1000\ne) Networks trained with 10 infusion steps. In this experiment we use the same infusion rate for each time step such that V(t) = (o). Each infusion ate in the figure corresponds to different values for (0)\nEffects of infuse rate on the lower bound trained with 5 steps 1500 1000 500 500 Infuse rate [w] 0.0 -0.03 0.08 0.15 1000 0.01 0.05 -0.1 0.2 1500 0 200 400 600 800 1000 Enochs\nEffects of infuse rate on the lower bound trained with 15 steps 1500 1000 (oe pann) penng dnnn 500 500 Infuse rate [] 0.0 0.02 0.04 0.05 1000 0.01 0.03 1500 200 400 600 800 1000 Epochse d) Networks trained with 15 infusion steps. Ea infusion rate corresponds to w. We set a(o) = 0. Effects of infuse rate on the lower bound trained with 15 steps 1500 1000 500 500 Infuse rate [] 0.0 0.03 0.06 0.15\nEffects of infuse rate on the lower bound trained with 15 steps 1500 1000 500 500 Infuse rate [w] -0.0 0.02 0.04 0.05 1000 0.01 -0.03 1500 0 200 400 600 800 1000\n(d) Networks trained with 15 infusion steps. Each\nEffects of infuse rate on the lower bound trained with 15 steps 1500 1000 500 500 Infuse rate [a 0.0 0.03 0.06 0.15 0.01 0.04 0.08 0.2 1000 0.02 0.05 0.1 0.3 1500 () 200 400 600 800 1000\n(f) Networks trained with 15 infusion steps. In this. experiment we use the same infusion rate for each. time step such that Vta(t) Q(0) Each infu- sion rate in the figure corresponds to different values\nFigure 6: Training curves on MNIST showing the log likelihood lower bound (nats) for different infusion rate schedules and different number of steps. We use an increasing schedule a(t) = a. t-1) w. In each sub-figure for a fixed number of steps, we show the lower bound for different infusion rates.\nTable 4: Infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a network trained with different number of steps. Each sub-table corresponds to a fixed number of steps. Each row corresponds to a different infusion rate, where we show its lower bound and alsc its corresponding generated samples from the trained model. Note that for images, we show the mean of the Gaussian distributions instead of the true samples. As the number of steps increases, the optimal infusion rate decreases. Higher number of steps contributes to better qualitative samples, as the best samples can be seen with 15 steps using Q = 0.01.\ninfusion rate Lower bound (test) 0.0 824.50 0.01 1351.03 0.02 1066.60 0.03 609.10 0.04 876.93 0.05 -479.69 0.06 -941.78\n(a) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a network trained with 1 step.\ninfusion rate. Lower bound (test). Means of the model 0.0 824.34 9 0.05 885.35 0.1 967.25 0.15 1063.27 0.2 1115.15 0.25 1158.81 3 0.3 1209.39 0.4 1209.16 0.5 1132.05 0.6 1008.60 0.7 854.40 0.9 -161.37\n(b) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networ trained with 5 steps\ninfusion rate. 
Lower bound (test) 0.0 823.81 0.01 910.19 0.03 1142.43 0.05 1303.19 0.08 1406.38 0.1 1448.66 0.15 1397.41 0.2 1262.57\nc) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networ trained with 10 steps\ninfusion rate Lower bound (test) 0.0 824.42 0.01 1254.07 0.02 1389.12 0.03 1366.68 0.04 1223.47 0.05 1057.43 0.05 846.73 9 0.07 658.66\n(d) infusion rate impact on the lower bound log-likelihood (test set) and the samples generated by a networ trained with 15 steps\n(a) Chains infused with MNIST test set samples by a constant rate ((0) = 0.05, w = 0) in 15 steps.\n(c) Chains infused with MNIST test set samples by an increasing rate ((0) = 0.0, w = 0.01) in 15 steps.\nFigure 7: Comparing samples of constant infusion rate versus an increasing infusion rate on infused and generated chains. The models are trained on MNIST in 15 steps. Note that having an increasing infusion rate with a small value for w allows a slow convergence to the target distribution. In contrast having a constant infusion rate leads to a fast convergence to a specific point. Increasing infusion rate leads to more visually appealing samples. We observe that having an increasing infusion rate over many steps ensures a slow blending of the model distribution into the target distribution.\n(b) Model sampling chains on MNIST using a net- work trained with a constant infusion rate ((0) 0.05, w = 0) in 15 steps\nE -. I\nd) Model sampling chains on MNIST using a network trained with an increasing infusion rate (0) = 0.0. w = 0.01) in 15 steps.\n(a) Infusion chains on CIFAR-10. Last column corresponds to the target used to infuse the chain\nFigure 8: Infusion chains (Sub-Figure[8a) and model sampling chains (Sub-Figure 8b) on CIFAR 10.\n(b) Model sampling chains on CIFAR-10\n(a) Infusion chains on CelebA. Last column corresponds to the target used to infuse the chain\n300030000000000\nFigure 9: Infusion chains (Sub-Figure[9a) and model sampling chains (Sub-Figure[9b) on CelebA"}] |
ryT9R3Yxe | [{"section_index": "0", "section_name": "GENERATIVE PARAGRAPH VECTOR", "section_text": "Ruqing Zhang. Jiafeng Xu& Xueqi Cheng\nCAS Key Lab of Network Data Science and Technology Institute of Computing Technology, Chinese Academy of Sciences Beijing, China\nThe recently introduced Paragraph Vector is an efficient method for learning high. quality distributed representations for pieces of texts. However, an inherent lim- itation of Paragraph Vector is lack of ability to infer distributed representations for texts outside of the training set. To tackle this problem, we introduce a Gen- erative Paragraph Vector, which can be viewed as a probabilistic extension of the Distributed Bag of Words version of Paragraph Vector with a complete generative. process. With the ability to infer the distributed representations for unseen texts,. we can further incorporate text labels into the model and turn it into a supervised version, namely Supervised Generative Paragraph Vector. In this way, we can leverage the labels paired with the texts to guide the representation learning, and employ the learned model for prediction tasks directly. Experiments on five text classification benchmark collections show that both model architectures can yield superior classification performance over the state-of-the-art counterparts.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "A central problem in many text based applications, e.g., sentiment classification (Pang & Lee]2008) question answering (Stefanie Tellex & Marton.2003) and machine translation (I. Sutskever & Le. 2014), is how to capture the essential meaning of a piece of text in a fixed-length vector. Per- haps the most popular fixed-length vector representations for texts is the bag-of-words (or bag-of-n- grams) (Harris| 1954). Besides, probabilistic latent semantic indexing (PLSI) (Hofmann 1999) and latent Dirichlet allocation (LDA) (Blei & Jordan]2003) are two widely adopted alternatives..\nA recent paradigm in this direction is to use a distributed representation for texts (T. Mikolov & Dean2013a).In particular, Le and Mikolov (Quoc Le]2014} Andrew M.Dai2014) show tha their method, Paragraph Vector (PV), can capture text semantics in dense vectors and outperforn many existing representation models. Although PV is an efficient method for learning high-qualit distributed text representations, it suffers a similar problem as PLSI that it provides no model on tex vectors: it is unclear how to infer the distributed representations for texts outside of the training se with the learned model (i.e., learned text and word vectors). Such a limitation largely restricts th usage of the PV model, especially in those prediction focused scenarios.\nInspired by the completion and improvement of LDA over PLSI, we first introduce the Generativ Paragraph Vector (GPV) with a complete generation process for a corpus. Specifically, GPV can be viewed as a probabilistic extension of the Distributed Bag of Words version of Paragraph Vector (PV DBOw), where the text vector is viewed as a hidden variable sampled from some prior distributions and the words within the text are then sampled from the softmax distribution given the text and wor vectors. With a complete generative process, we are able to infer the distributed representation of new texts based on the learned model. 
Meanwhile, the prior distribution over text vectors als acts as a regularization factor from the view of optimization, thus can lead to higher-quality tex representations.\nMore importantly, with the ability to infer the distributed representations for unseen texts, we now can directly incorporate labels paired with the texts into the model to guide the representation learn ing, and turn the model into a supervised version, namely Supervised Generative Paragraph Vecto (SGPV). Note that supervision cannot be directly leveraged in the original PV model since it has nc"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "generalization ability on new texts. By learning the SGPV model, we can directly employ SGPV. to predict labels for new texts. As we know, when the goal is prediction, fitting a supervised model. would be a better choice than learning a general purpose representations of texts in an unsupervised. way. We further show that SGPV can be easily extended to accommodate n-grams so that we can. take into account word order information, which is important in learning semantics of texts.\nWe evaluated our proposed models on five text classification benchmark datasets. For the unsuper. vised GPV, we show that its superiority over the existing counterparts, such as bag-of-words, LDA, PV and FastSent (Felix Hill2016). For the SGPV model, we take into comparison both traditional supervised representation models, e.g. MNB (S. Wang]2012), and a variety of state-of-the-art deep neural models for text classification (Kim2014|N. Kalchbrenner2014] Socher & Potts]2013||Irsoy & Cardie|2014). Again we show that the proposed SGPV can outperform the baseline methods by a substantial margin, demonstrating it is a simple yet effective model.."}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Many text based applications require the text input to be represented as a fixed-length feature vector.. The most common fixed-length representation is bag-of-words (BoW) (Harris1954). For example. in the popular TF-IDF scheme (Salton & McGill] 1983), each document is represented by tfidf. values of a set of selected feature-words. However, the BoW representation often suffers from data sparsity and high dimension. Meanwhile, due to the independent assumption between words, Bo. representation has very little sense about the semantics of the words..\nTo address this shortcoming, several dimensionality reduction methods have been proposed, such. as latent semantic indexing (LSI) (S. Deerwester & Harshman|1990), Probabilistic latent semantic indexing (PLSI) (Hofmann|1 1999) and latent Dirichlet allocation (LDA) (Blei & Jordan2003). Both PLSI and LDA have a good statistical foundation and proper generative model of the documents,. as compared with LSI which relies on a singular value decomposition over the term-document co-. occurrence matrix. In PLSI, each word is generated from a single topic, and different words in a document may be generated from different topics. While PLSI makes great effect on probabilistic modeling of documents, it is not clear how to assign probability to a document outside of the training. set with the learned model. To address this issue, LDA is proposed by introducing a complete gen- erative process over the documents, and demonstrated as a state-of-the-art document representation method. To further tackle the prediction task, Supervised LDA (David M.Blei 2007) is developed. 
by jointly modeling the documents and the labels..\nRecently, distributed models have been demonstrated as efficient methods to acquire semantic rep. resentations of texts. A representative method is Word2Vec (Tomas Mikolov & Dean2013b), whicl. can learn meaningful word representations in an unsupervised way from large scale corpus. Tc represent sentences or documents, a simple approach is then using a weighted average of all the. words. A more sophisticated approach is combing the word vectors in an order given by a parse. tree (Richard Socher & Ng]2012). Later, Paragraph Vector (PV) (Quoc Le]2014) is introduced tc directly learn the distributed representations of sentences and documents. There are two variants ir. PV, namely the Distributed Memory Model of Paragraph Vector (PV-DM) and the Distributed Bag. of Words version of Paragraph Vector (PV-DBOw), based on two different model architectures. Although PV is a simple yet effective distributed model on sentences and documents, it suffers a similar problem as PLSI that it provides no model on text vectors: it is unclear how to infer the. distributed representations for texts outside of the training set with the learned model.\nThe rest of the paper is organized as follows. We first review the related work in section|2|and briefly. describe PV in section[3] We then introduce the unsupervised generative model GPV and supervised generative model SGPV in section4|and section 5|respectively. Experimental results are shown in section6|and conclusions are made in section7\nBesides these unsupervised representation learning methods, there have been many supervised deep models with directly learn sentence or document representations for the prediction tasks. Recursive Neural Network (RecursiveNN) (Richard Socher & Ng2012) has been proven to be efficient in terms of constructing sentence representations. Recurrent Neural Network (RNN) (Ilya Sutskever & Hinton2011) can be viewed as an extremely deep neural network with weight sharing across time. Convolution Neural Network (CNN) (Kim] 2014) can fairly determine discriminative phrases in a\nWi+1 n W! the cat sat on Projection dn the cat sat on\nFigure 1: Distributed Bag of Words version of paragraph vectors. The paragraph vector is used tc predict the words in a small window (\"the\", \"cat\"', \"sat' and \"on').\ntext with a max-pooling layer. However, these deep models are usually quite complex and thus the training would be time-consuming on large corpus..\nSince our model can be viewed as a probabilistic extension of the PV-DBOW model with a complete generative process, we first briefly review the PV-DBOW model for reference.\nIn PV-DBOW, each text is mapped to a unique paragraph vector and each word is mapped to a. unique word vector in a continuous space. The paragraph vector is used to predict target words randomly sampled from the paragraph as shown in Figure[1] More formally, Let D={d1, . .., d}. denote a corpus of N texts, where each text dn = (w, w2,..., w),n E 1, 2,..., N is an ln-. length word sequence over the word vocabulary V of size M. Each text d E D and each word w E V is associated with a vector d E RK and w E RK, respectively, where K is the embedding. dimensionality. The predictive objective of the PV-DBOw for each word wf E dn is defined by the. softmax function\nexp(w? . d p(w;[dn) w'ev exp(w' : 0"}, {"section_index": "4", "section_name": "The PV-DBOw model can be efficiently trained using the stochastic gradient descent (Rumelhart & Williams1986) with negative sampling (T. 
Mikolov & Dean2013a).", "section_text": "In this section, we introduce the GPV model in detail. Overall, GPV is a generative probabilistic model for a corpus. We assume that for each text, a latent paragraph vector is first sampled from some prior distributions, and the words within the text are then generated from the normalized exponential (i.e. softmax) distribution given the paragraph vector and word vectors. In our work,. multivariate normal distribution is employed as the prior distribution for paragraph vectors. It could\nW? the cat sat on rojection on the cat sat on\nAs compared with traditional topic models, e.g. PLSI and LDA, PV-DBOw conveys the following. merits. Firstly, PV-DBOw using negative sampling can be interpretated as a matrix factorization. over the words-by-texts co-occurrence matrix with shifted-PMI values (Omer Levy & Ramat-Gan. 2015). In this way, more discriminative information (i.e., PMI) can be modeled in PV as compared. with the generative topic models which learn over the words-by-texts co-occurrence matrix with raw. frequency values. Secondly, PV-DBOw does not have the explicit \"topic\" layer and allows words. automatically clustered according to their co-occurrence patterns during the learning process. In this. way, PV-DBOw can potentially learn much finer topics than traditional topic models given the same. hidden dimensionality of texts. However, a major problem with PV-DBOw is that it provides no. model on text vectors: it is unclear how to infer the distributed representations for unseen texts.\nW dn w n W U N N Yn\nFigure 2: (Left) A graphical model representation of Generative Paragraph Vector (GPV). (The boxes are \"plates\"' representing replicates; a shaded node is an observed variable; an unshaded node is a hidden variable.) (Right) Graphical model representation of Supervised Generative Paragraph Vector (SGPV).\nbe replaced by other prior distributions and we will leave this as our future work. The specific generative process is as follows:\nFor each text d, E D, n = 1, 2,..., N:. (a) Draw paragraph vector dn ~ N(, ) (b) For each word w? E dn,i = 1,2,...,. Draw word w? softmar(d...I.\nwhere W denotes a k M word embedding matrix with W*; = w;, and softmax(dn . W), is the softmax function defined the same as in Equation (1). Figure[2|(Left) provides the graphical mode of this generative process. Note that GPV differs from PV-DBOw in that the paragraph vector is a. hidden variable generated from some prior distribution, which allows us to infer the paragraph vectoi. over future texts given the learned model. Based on the above generative process, the probability o the whole corpus can be written as follows:.\nN 1 p(dn\\,) D p(w}|W,dn)dd n=1 w?Edn\nTo learn the model, direct maximum likelihood estimation is not tractable due to non-closed form of the integral. We approximate this learning problem by using MAP estimates for dn, which can be formulated as follows:\nIp(dn|,) I (*, *, W*) = arg max p(w?|W, dn ,,W w?Edn\nwhere dn denotes the MAP estimate of dn for dn, (*, *, W*) denotes the optimal solution. Note that for computational simplicity, in this work we fixed as a zero vector and as a identity matrix In this way, all the free parameters to be learned in our model are word embedding matrix W. 
By taking the logarithm and applying the negative sampling idea to approximate the softmax function we obtain the final learning problem.\nN =-||dn|12+ > (logo(wr.dn)+kEw'~Pnw logo(-w' dn)) w?Edn\nwhere o(x) = 1/(1 + exp(-x)), k is the number of \"negative\"' samples, w' denotes the sampled word and Pnw denotes the distribution of negative word samples. As we can see from the final objective function, the prior distribution over paragraph vectors actually act as a regularization term. From the view of optimization, such regularization term could constrain the learning space and usually produces better paragraph vectors.\nFor optimization, we use coordinate ascent, which first optimizes the word vectors W while leaving the MAP estimates (d) fixed. Then we find the new MAP estimate for each document while leaving the word vectors fixed, and continue this process until convergence. To accelerate the learning, we adopt a similar stochastic learning framework as in PV which iteratively updates W and estimates d by randomly sampling text and word pairs.\nAt prediction time, given a new text, we perform an inference step to compute the paragraph vector for the input text. In this step, we freeze the vector representations of each word, and apply the same MAP estimation process of d as in the learning phase. With the inferred paragraph vector of the test text, we can feed it to other prediction models for different applications."}, {"section_index": "5", "section_name": "5 SUPERVISED GENERATIVE PARAGRAPH VECTOR", "section_text": "With the ability to infer the distributed representations for unseen texts, we now can incorporate the labels paired with the texts into the model to guide the representation learning, and turn the model into a more powerful supervised version directly towards prediction tasks. Specifically, we introduce an additional label generation process into GPV to accommodate text labels, and obtain the Supervised Generative Paragraph Vector (SGPV) model. Formally, in SGPV, the n-th text dn and the corresponding class label yn. E {1, 2, . .., C} arise from the following generative process:\nFor each text dn E D, n = 1, 2, ..., N: (a) Draw paragraph vector dn ~ N(, ) (b) For each word w? E dn, i = 1, 2,..., Draw word w? ~ softmax(dn . W). (c) Draw label yn|dn, U, b ~ softmax(U . dn.\nwhere U is a C K matrix for a dataset with C output labels, and b is a bias term\nThe graphical model of the above generative process is depicted in Figure2|(Right). SGPV defines the probability of the whole corpus as follows\nN p(D)=II p(dn\\,E)(11 p(w|W,dn))p(yn|dn,U,b)dd n=1 w?Edn\nThe above SGPV may have limited modeling ability on text representation since it mainly relies on uni-grams. As we know, word order information is often critical in capturing the meaning of texts. For example, \"machine learning\" and \"learning machine\" are totally different in meaning with the same words. There has been a variety of deep models using complex architectures such as convolution layers or recurrent structures to help capture such order information at the expense of large computational cost.\nHere we propose to extend SGPV by introducing an additional generative process for n-grams, so that we can incorporate the word order information into the model and meanwhile keep its simplicity in learning. We name this extension as SGPV-ngram. Here we take the generative process of SGPV bigram as an example\nFor each text dn E D, n = 1, 2,..., N: (a) Draw paragraph vector dn ~ N(, ) (b) For each word w? 
E dn, i = 1, 2,...,l. Draw word wn. oftmar(d.\nWe adopt a similar learning process as GPV to estimate the model parameters. Since the SGPV includes the complete generative process of both paragraphs and labels, we can directly leverage it to predict the labels of new texts. Specifically, at prediction time, given all the learned model parameters, we conduct an inference step to infer the paragraph vector as well as the label using MAP estimate over the test text.\n(c) For each bigram g E dn, i = 1, 2, ..., Sn : Draw bigram gt ~ softmax(dn : G) (d) Draw label y . U.b ~ softmax(U . dn+\nN p(D)=I p(dn|,E)( p(w}|W,dn))(I p(g|G,dn))p(yn|dn,U,b)dd w?Edn gI Edn\nIn this section, we introduce the experimental settings and empirical results on a set of text classi fication tasks.\nWe made use of five publicly available benchmark datasets in comparison\nSubj: Subjectivity dataset (Pang & Lee|[2004) which contains 5, 000 subjective instances and 5, 000 objective instances. The task is to classify a sentence as being subjective or objective.\nSST-1: Stanford Sentiment Treebank (Socher & Potts|2013)3] SST-1 is provided with train/dev/test splits of size 8, 544/1, 101/2, 210. It is a fine-grained classification over five classes: very negative negative, neutral, positive, and very positive.\nSST-2: SST-2 is the same as SST-1 but with neutral reviews removed. We use the standard train/dev/test splits of size 6, 920/872/1, 821 for the binary classification task\nPreprocessing steps were applied to all datasets: words were lowercased, non-English characters. and stop words occurrence in the training set are removed. For fair comparison with other published. results, we use the default train/test split for TREC, SST-1 and SST-2 datasets. Since explicit split. of train/test is not provided by subj and MR datasets. we use 10-fold cross-validation instead.\nIn our model. text and word vectors are randomly initialized with values uniformly distributed in the range of [-0.5, +0.5]. Following the practice in (Tomas Mikolov & Dean2013b) , we set the noise. distributions for context and words as Pnw(w) #(w)0.75. We adopt the same linear learning rate. strategy where the initial learning rate of our models is O.025. For unsupervised methods, we use. support vector machines (SVM)as the classifier.\nWe adopted both unsupervised and supervised methods on text representation as baselines\nBag-of-word-TFIDF and Bag-of-bigram-TFIDF. In the bag-of-word-TFIDF scheme (Salton & McGilll 1983) , each text is represented as the tf-idf value of chosen feature-words. The bag-of.\nhttp://cogcomp.cs.illinois.edu/Data/QA/Qc/ https://www.cs.cornell.edu/people/pabo/movie-review-data http://nlp.stanford.edu/sentiment http://www.csie.ntu.edu.tw/~cjlin/libsvm/.\nwhere G denotes a K S bigram embedding matrix with G*j = gj, and S denotes the size of bigram vocabulary. The joint probability over the whole corpus is then defined as\nMR: Movie reviews (Pang & Lee 2005)[with one sentence per review. There are 5, 331 positive. sentences and 5, 331 negative sentences. The objective is to classify each review into positive or negative category.\ncBow (Tomas Mikolov & Dean2013b). Continuous Bag-Of-Words model. We use average pooling as the global pooling mechanism to compose a sentence vector from a set of word vectors\nPV (Quoc Le]2014). Paragraph Vector is an unsupervised model to learn distributed representations of words and paragraphs\nFastSent (Felix Hill|2016). 
In FastSent, given a simple representation of some sentence in context the model attempts to predict adjacent sentences\nNote that unlike LDA and GPV. LSI. cBow, and FastSent cannot infer the representations of unseen texts. Therefore, these four models need to fold-in all the test data to learn representations together. with training data, which makes it not efficient in practice.\nDAN (Mohit Iyyer & III2015). Deep averaging network uses average word vectors as the inpu and applies multiple neural layers to learn text representation under supervision.\nCNN-multichannel 1 (Kim 2014). CNN-multichannel employs convolutional neural network for sentence modeling.\nDCNN (N. Kalchbrenner]2014). DCNN uses a convolutional architecture that replaces wide con volutional layers with dynamic pooling layers.\nDependency Tree-LSTM (Kai Sheng Tai & Manning2015). The Dependency Tree-LSTM base. on LSTM structure uses dependency parses of each sentence..\nWe first evaluate the GPV model by comparing with the unsupervised baselines on the TREC, Sub and MR datasets. As shown in table[1] GPV works better than PV over the three tasks. It demon strates the benefits of introducing a prior distribution (i.e., regularization) over the paragraph vectors Moreover, GPV can also outperform almost all the baselines on three tasks except Bow-TFIDF anc Bigram-TFIDF on the TREC collection. The results show that for unsupervised text representation bag-of-words representation is quite simple yet powerful which can beat many embedding models Meanwhile, by using a complete generative process to infer the paragraph vectors, our model can achieve the state-of-the-art performance among the embedding based models.\nWe compare SGPV model to supervised baselines on all the five classification tasks. Empirical res ults are shown in Table 2] We can see that SGPV achieves comparable performance against othe deep learning models. Note that SGPV is much simpler than these deep models with significantly less parameters and no complex structures. Moreover, deep models with convolutional layers or re current structures can potentially capture compositional semantics (e.g.. phrases). while SGPV onl\nLSI (S. Deerwester & Harshman1990) and LDA (Blei & Jordan]2003). LSI maps both texts and words to lower-dimensional representations in a so-called latent semantic space using SVD lecomposition. In LDA, each word within a text is modeled as a finite mixture over an underlying set of topics. We use the vanilla LSI and LDA in the gensim library with topic number set as 100.\nDRNN (Irsoy & Cardiel2014). Deep Recursive Neural Networks is constructed by stacking multiple\nTable 1: Performance Comparison of Unsupervised Representation Models\nModel TREC Subj MR BoW-TFIDF 97.2 89.8 76.7 Bigram-TFIDF 97.6 90.9 76.1 LSI 88 85.4 64.2 LDA 81.3 71 61.6 cBow (Han Zhao & Poupart 2015 87.3 91.3 77.2 PV (Han Zhao & Poupart 2015 91.8 90.5 74.8 FastSent (Felix Hill. 2016 76.8 88.7 70.8 GPV 93 91.7 77.9\nBy introducing bi-grams, SGPV-bigram can outperform all the other deep models on four tasks In particular, the improvements of SGPV-bigram over other baselines are significant on SST-1 and SST-2. These results again demonstrated the effectiveness of our proposed SGPV model on text representations. It also shows the importance of word order information in modeling text semantics.\nTable 2: Performance Com arison of Supervised Representation Models\nModel SST-1 SST-2 TREC Subj MR NBSVM (S. Wang 2012 93.2 79.4 MNB S. 
Wang 2012 93.6 1 79 DAN [Mohit Iyyer & III 2015 47.7 86.3 CNN-multichannel (Kim 2014 47.4 88.1 92.2 93.2 81.1 DCNN (N. Kalchbrenner 2014) 48.5 86.8 93 MV-RNN (Richard Socher & Ng 2012 44.4 82.9 79 DRNN (Irsoy & Cardie 2014) 49.8 86.6 1 1 Dependency Tree-LSTM (Kai Sheng Tai & Manning 2015 48.4 85.7 1 1 SGPV 44.6 86.3 93.2 92.4 79.2 SGPV-bigram 55.9 91.8 95.8 93.6 79.8\nIn this paper, we introduce GPV and SGPV for learning distributed representations for pieces of texts. With a complete generative process, our models are able to infer vector representations as well as labels over unseen texts. Our models keep as simple as PV models, and thus can be effi- ciently learned over large scale text corpus. Even with such simple structures, both GPV and SGPV can produce state-of-the-art results as compared with existing baselines, especially those complex deep models. For future work, we may consider other probabilistic distributions for both paragraph vectors and word vectors.\nrelies on uni-gram. In this sense, SGPV is quite effective in learning text representation. Mean- while, if we take Table 1 into consideration, it is not surprising to see that SGPV can consistently outperform GPV on all the three classification tasks. This also demonstrates that it is more effect- ive to directly fit supervised representation models than to learn a general purpose representation in prediction scenarios."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Ng A. Blei, D. and M. Jordan. Latent dirichlet allocation. Journal of Machine Learning Research 3:993-1022, 2003.\nJon D.McAuliffe David M.Blei. Supervised topic models. In Proceedings of Advances in Neural Information Processing Systems. 2007.\nAnna Korhone Felix Hill, Kyunghyun Cho. Learning distributed representations of sentences from unlabelled data. arXiv preprint arXiv:1602.03483, 2016\nZellig. Harris. Distributional structure. Word. 1954\nJames Martens Ilya Sutskever and Geoffrey E Hinton. Generating text with recurrent neural net IOrksnProcoodin neIearnino?011\nRichard Socher Kai Sheng Tai and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the Association for Computational Linguistics. 2015.\nBo Pang and Lillian Lee. Opinion mining and sentiment analysis. Foundations and trends in in formation retrieval, 2(1-2):1-135, 2008\nZhengdong Lu Han Zhao and Pascal Poupart. Self-adaptive hierarchical sentence model. In IJCAI. 2015.\nT. Hofmann. Probabilistic latent semantic indexing. In Proceedings of the Twenty-Second Annual International S1GIR Conference. 1999. O. Vinyals I. Sutskever and Q. V. Le. Sequence to sequence learning with neural networks. In Proceedings of Advances in Neural Information Processing Systems. 2014..\nHinton Geoffrey E Rumelhart, David E and Williams. Learning representations by back-propagating errors. Nature, 323(6088):533-536, 1986.\nG. W. Furnas Landauer. T. K. S. Deerwester, S. T. Dumais and R. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41, 1990. C. Manning S. Wang. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. 2012.\nG. Salton and M. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983\nbank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing 2013. 
Jimmy Lin Aaron Fernandes Stefanie Tellex, Boris Katz and Gregory Marton. Quantitative evalu. ation of passage retrieval algorithms for question answering. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Retrieval. 2003. K. Chen G. S. Corrado T. Mikolov, I. Sutskever and J. Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Pro- cessing Systems. 2013a.\nGreg Corrado Tomas Mikolov, Kai Chen and Jeffrey Dean. Efficient estimation of word representa tions in vector space. arXiv:1301.3781, 2013b."}] |
r1osyr_xg | [{"section_index": "0", "section_name": "1 INTRODUCTION", "section_text": "Vector-space representations of words are reported useful and improve the performance of the ma chine learning algorithms for many natural language processing tasks such as name entity recogni-. tion and chunking (Turian et al.|2010), text classification (Socher et al.]2012 Le & Mikolov2014 Kim2014} Joulin et al.2016), topic extraction (Das et al.]2015Li et al.2016), and machine translation (Zaremba et al.|2014) Sutskever et al. 2014).\nPeople are still trying to improve the vector-space representations for words.Bojanowski et al. (2016) attempt to improve word vectors by involving character level information. Other works (Y. & Dredze2014] [Xu et al.[ [2014] Faruqui et al.[2015] Bollegala et al.]2016] try to estimate bette word vectors by using a lexicon or ontology. The idea is simple: because a lexicon or ontology. contains well-defined relations about words, we can use them to improve word vectors.\nHowever, for a polysemous word, one of its synonym does not always mean the same thing with the. original one under different contexts. For example, the word 'point\"' equals 'score\" in \"Team A go. 3 points\", but does not in \"my point of view\" A method to address this issue is to estimate a vector. for each word sense (Huang et al.f2012 Chen et al.[ 2014) or per word type (Neelakantan et al.. 2014). However, it requires additional word sense disambiguation or part-of-speech tagging to use. such word vectors\nIn this paper, we propose a method to improve the vector-space representations using a lexicon and alleviate the adverse effect of polysemy, keeping one vector per word. We estimate the degree of reliability for each paraphrase in the lexicon and eliminate the ones with lower degrees in learn ing. The experimental results show that the proposed method is effective and outperforms the prior works. The major contributions of our work include:"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We propose a novel approach involving fuzzy sets to reduce the noise brought by polyse mous words in the word vector space when a lexicon is used for learning, and a model to use the fuzzy paraphrase sets to learn the word vector space.\nLexicon Layer Context Words Paraphrases Control Function V * W Hidden Layer Softmax Target Word\nFigure 1: The process flow of the proposed method\nAs described in section 1] whether a polysemous word's paraphrase is the same as the original depends on the context.\nHenceforth, if we simply use all the paraphrases of a word in the lexicon to improve the word vecto without discrimination, they may sometimes bring noise to the vector-space.\nA conventional method for them is to give each word sense a vector. However, such vector-spaces require additional word sense disambiguation in practical use\nHere, we propose a method to alleviate the adverse effects of polysemous words' paraphrases with out word sense disambiguation. Our idea is to annotate each paraphrase with a degree about its reliability, like a member of a fuzzy set. We call such paraphrases as \"fuzzy paraphrases\", and their degrees as the \"memberships.\"\nWe also propose a novel method to jointly learn corpus with a lexicon, in order to use fuzzy para phrases to improve the word vectors.\nIf the meanings of two words are totally the same, they can replace each other in a text without changing the semantic features. 
Henceforth, we can learn the lexicon by replacing the words in the corpus with its lexical paraphrases.\nWe learn the word vectors by maximizing the probability of a word for a given context, and also. for a generated context where words are replaced by their paraphrases randomly. The memberships of the fuzzy paraphrases are used here to control the probability that the replacements occur by a. control function as shown in Figure1\nAlthough some prior works propose to solve the polysemy problem by estimating one vector per word sense or type, using such word vectors requires additional pre-process Our proposed method keeps one vector per word. It makes the word vectors easier to use in practical terms: it is neither necessary to disambiguate the word senses nor to tag the part-of-speeches before we use the word vectors.\nWe give an introduction of our proposed method in section 2 We show the effects of different para-. phrase sets, parameters, corpus size, and evaluate the effectiveness of our approach by comparing to simpler algorithms in section[3] We compare our approach with the prior works via an evaluation experiment in section|4] We give the findings, conclusions and outlook in section|5.\nLw T logp(wi|wj)+ f(xjk) logp(wi|Wk wiET(i-c)<j<(i+c) Wk ELr\nThe function f(xjk) of the membership xjk is a specified drop-out function. It returns O more fc the paraphrases that have lower memberships, and 1 more for the others..\nLooking for a control function that is easy to train, we notice that if two words are more often to b translated to the same word in another language, the replacement of them are less likely to change the meaning of the original sentence. Thus, we use a function of the bilingual similarity (denoted as Sik) as the membership function:\nf(xjk) ~ Bernoulli(xjk)\nxjk) ~ Bernoulli(xjk Sjk Xj k max jET,kEL\nWe do not need to train f(xk) using the method described above. The model can be trained by negative sampling (Mikolov et al.]2013b): For word wo and a word w1 in its context, denote A1 as the set of the paraphrases for w1 accepted by f(xjk), we maximize log p(wo|w1) by distinguishing the noise words from a noise distribution Pn(w) from wo and its accepted paraphrases in A1 by logistic regression:\nn log p(wo|w1) = logo(VwoVw1 Ew, ~ Pn(w)[logo(-vw,Tvw1)],wi F wo,wi\nHere, Vwo T and Uwi T stand for the transposed matrices of Vwo and Vw, respectively. n is the. number of negative samples used. o(x) is a sigmoid function, o(x) = 1/(1 + e-x)\nWe use enwiki91|mainly for tuning and model exploration. It has a balanced size(1 GB), containing 123,353,508 tokens. It provides enough data to alleviate randomness while it does not take too much time for our model to learn\n' http://mattmahoney.net/dc/enwiki9.zip\nFor a text corpus T, denote w; the ith word in T, c the context window, w; a word in the context window, Lw, the paraphrase set of w; in the lexicon L, wk the kth fuzzy paraphrase in Lw,, and xjk the membership of wk for w;, the objective is\nxjk = g(Sjk).\nWe scale the similarity score of the paraphrase wg to [0, 1] in PPDB2.0 as the memberships, and draw the values of f(x k) from a Bernoulli distribution subjected to them. Denote Sk the similarity score of word w; and wg in PPDB2.0, the value of f(xk) is drawn from the Bernoulli distribution:\nTable 1: The results of 10 times repeated learning and test under each benchmark. The vector-space dimension is set to 100. Enwiki9 is used as the corpus. 
The maximum, minimum, and the margir of error are marked bold.\nBenchmark SimLex wS353 RW MEN SEM SYN 1 29.41 62.02 38.12 60.00 13.26 27.77 2 29.57 62.49 38.26 60.39 12.70 27.27 3 29.48 61.04 39.90 59.80 13.89 26.94 4 29.52 60.20 39.68 59.81 14.02 27.11 5 28.69 63.45 38.65 60.16 12.94 26.87 6 29.26 61.95 39.13 59.73 13.75 26.60 7 29.46 62.90 39.12 60.45 13.42 25.98 8 28.51 62.96 37.93 59.31 13.58 27.10 9 29.13 62.44 39.91 59.75 13.98 26.89 10 28.59 60.66 38.67 60.24 13.66 26.98 Margin of Error 0.98 2.41 1.98 1.14 1.32 1.79\nWe use ukWaC (Baroni et al.] 2009) to compare with the prior works in section4 But we do no use it for model exploration, because it takes more than 20 hours to learn it, as an enormous corpu. containing 12 GB text."}, {"section_index": "2", "section_name": "3.2 BENCHMARKS", "section_text": "We used several benchmarks. They are Wordsim-353 (ws 353) (Finkelstein et al.[2001) (353 wor pairs), SimLex-999 (SimLex) (Hill et al.l|2016) (999 word pairs), the Stanford Rare Word Similar. ity Dataset (Rw) (Luong et al.]2013) (2034 word pairs), the MEN Dataset (MEN) (Bruni et al.]2014 (3000 word pairs), and the Mikolov's (Google's) word analogical reasoning task (Mikolov et al.. 2013a).\nwS353, SimLex, and Rw are gold standards. They provide the similarity of words labeled b humans. We report the Spearman's rank correlation (p) for them..\nMikolov's word analogical reasoning task is another widely used benchmark for word vectors. It contains a semantic part (SEM), and a syntactic part (SYN). We use the basic way suggested in their paper to find the answer for it: to guess word b' related to b in the way how a' is related to a, the word closest in cosine similarity to a' - a + b is returned as b'.\nWe find that the benchmark scores change every time we learn the corpus, even under the same settings. It is because that the models involve random numbers. Therefore we should consider the margin of error of the changes when we use the benchmarks..\nTo test the margin of error, we firstly used our proposed method to repeat learning enwiki9 for 10 times under the same parameters. Then we tested the vectors under each benchmark, to find the margin of error. In each test, we used the same parameters: the vector dimension was set to 100 for speed, the window size was set to 8, and 25 negative samples were used. The results are shown in Table[1 We use them to analyze the other experimental results later."}, {"section_index": "3", "section_name": "3.3 DIFFERENT TYPES OF PARAPHRASES", "section_text": "In PPDB2.0, there are six relationships for paraphrases. For word X and Y, the different relation. ships between them defined in PPDB2.0 are shown in Table 2] We do not consider the exclusior and independent relations because they are not semantic paraphrases. Those of equivalence are the most reliable because they are the closest ones. 
But we still want to know whether it is better to tak\nTable 2: Different types of relationships of paraphrases in PPDB2.0(Pavlick et al 2015b a\nRelationship Type Description Equivalence X is the same as Y Forward Entailment X is more specific than/is a type of Y Reverse Entailment X is more general than/encompasses Y Exclusion X is the opposite of Y / X is mutually exclusive with Y OtherRelated X is related in some other way to Y Independent X is not related to Y 37.10 37.05 37.00 36.95 J s ennnnnns*00 I 36.90 36.85 36.80 36.75 36.70 36.65 Equivalence Entailment(Forward+Rev Equivalence+Entailment Equivalence+Entailment+Otl\nthe entailment and the other related paraphrases into consideration. We learn enwiki9 with differen paraphrase sets and use SimLex to evaluate the trained vectors.\nFigure2 compares the performance using different paraphrase sets, tested by SimLex. We can see that it is best to use the equivalence and entailment (forward + reverse) paraphrases together or use only the equivalence paraphrases. Only using the entailment paraphrases is weak. Involving the other related paraphrases deteriorates the performance. We use the Equivalence and Entailment paraphrases in the experiments according to these results."}, {"section_index": "4", "section_name": "3.4 EFFECTS OF PARAMETERS", "section_text": "We use our proposed method to learn enwiki9 under different parameter settings to evaluate the. effects of parameters. We firstly learn enwiki9 under different parameter settings and then test the vectors using SimLex, WS353, RW, MEN, SEM and SYN. We report Spearman's rank correlation p. for SimLex, WS353, RW and MEN, the percentage of correct answers for SEM and SYN..\nFigure 2: The p for SimLex using different paraphrase sets. The corpus is enwiki9. The vector- space dimension is set to 300. The context window size is set to 8. 25 negative samples are used in learning.\n38.00 71.50 39.40 37.50 71.00 39.20 37.00 S s 39.00 70.50 36.50 38.80 70.00 38.60 36.00 69.50 38.40 35.50 69.00 38.20 35.00 38.00 34.50 68.50 37.80 34.00 68.00 37.60 33.50 67.50 37.40 100 200 300 400 500 600 100 200 300 400 500 600 100 200 300 400 500 600 Vector Size Vector Size Vector Size (a) Simlex-999 (SimLex) (b) Wordsim-353 (wS353) (c) Rare Word Similarity Dataset (RW) 74.80 74.00 61.00 74.60 73.00 74.40 60.00 74.20 72.00 59.00 74.00 71.00 73.80 58.00 73.60 70.00 Ceorret 69.00 57.00 73.40 73.20 56.00 68.00 73.00 72.80 67.00 55.00 72.60 66.00 54.00 100 200 300 400 500 600 100 200 300 400 500 600 100 200 300 400 500 600 Vector Size Vector Size Vector Size (d) MEN dataset (MEN) (e) Word Analogical Reasoning. (f) Word Analogical Reasoning. (SEM) (SYN)\nFigure 3: The scores of the benchmarks using different vector-space dimensions. For ws353 SimLex, RW and MEN, we report 100 * p (Spearman's rank correlation). For word analogical rea. soning, we report the percentage of the correct answers. The context window size is set to 8. The number of negative samples is set to 25.."}, {"section_index": "5", "section_name": "3.4.1 EFFECTS OF VECTOR SPACE DIMENSION", "section_text": "We compare the benchmarks using different vector-space dimensions. Figure3|shows the change of each benchmark's scores under different dimensions.\nThe differences in the content of the benchmarks may cause the inconsistence. For example. SimLex rates related but dissimilar words lower than the other word similarity benchmarks (Hill et al.]2016f [Chiu et al.2016]. 
The results suggest that the best dimensions for our method depends On the task.\nWe compared the benchmarks using different context window sizes. They are shown in Figure 4 Previous works argue that larger window sizes introduce more topic words, and smaller ones em- phasize word functions (Turney2012f|Levy & Goldberg2014a] Levy et al.[2015] Hill et al.[2016 Chiu et al.]2016). Different context window sizes provide different balances between relatedness and similarity. The best window size depends on what we want the vectors to be. We also see that in our results.\nThe relationship between the window size and performance depends on how they rate the pairs. For example, ws 353 rates word pairs according to association rather than similarity (Finkelstein et al. 2001; Hill et al.[[2016). As larger window capture relatedness rather than similarity, the results show\nThe larger vectors do not bring the better performance for most of the benchmarks (except SimLex), although some previous works suggest that the higher dimensions brings better performance for their methods (Pennington et al.|2014f Levy & Goldberg2014b). The curves of SimLex and SYN are gradual. However, there are several abrupt changes in the others. And those of ws353 and RW do not change gradually. The best dimension for different benchmarks is not consistent.\n36.00 71.00 39.00 p 35.50 70.00 38.80 S S s Pemnnan 35.00 69.00 38.60 68.00 34.50 38.40 67.00 34.00 38.20 66.00 IS*001 33.50 65.00 38.00 33.00 64.00 37.80 32.50 63.00 37.60 2 3 4 5 6 7 8 9 10 2345678910 1 2 3 4 5 6 7 8 9 10 Window Size Window Size Window Size (a) Simlex-999 (SimLex) (b) Wordsim-353 (wS353) (c) Rare Word Similarity Dataset (RW) 73.50 70.00 56.00 73.00 55.00 65.00 72.50 54.00 72.00 60.00 53.00 71.50 55.00 52.00 71.00 orreer 51.00 70.50 50.00 70.00 50.00 45.00 69.50 49.00 69.00 40.00 48.00 2 345678910 2. 3 4 5678910 1 ). 4 5 6 8910 Window Size Window Size Window Size (d) MEN dataset (MEN) (e) Word Analogical Reasoning (f) Word Analogical Reasoning (SEM) (SyN)\nUnlike the other word similarity dataset, SimLex rates synonyms high and related dissimilar wor pairs low. Therefore, the smallest window is the most suitable for SimLex because it is best fo capturing the functional similarity.\nThe results of Rw differs from the others (Figure4c). There are many abrupt changes. The bes1. window size is 10. but 1 is better than 2-9. The dataset contains rare words. Because of their lov frequencies, usage of broad context window may be better to draw features for them. However. additional words introduced by larger windows may also deteriorate the vectors of unusual words. For such tasks requiring rare word vectors of high quality, we should be careful in tuning the contex1. window size.\nWe can consider that the best context window size depends on the task, but we should avoid usin. too large window.\nWe also explored the effects of the number of negative samples. The results are shown in Figure5\n2 According to their homepage: http://clic.cimec.unitn.it/ elia.bruni/MEN.html\nFigure 4: The scores of the benchmarks using different context window sizes. For ws353, SimLex, RW and MEN, we report 100 * p (Spearman's rank correlation). For word analogical reasoning, we report the percentage of the correct answers. We use 100-dimension vectors. The number of negative samples is set to 25.\nthat the larger the window is, the better for ws353. 
The MEN dataset also prefer relatedness than similarity (Bruni et al.2014), but they gave annotators examples involving similarity2 It may be. the reason that the windows larger than 8 deteriorate the benchmarks based on MEN (Figure 4d) The standards of ws353 and MEN to rate the words are similar (Bruni et al.]2014). It leads to. their similar curves (Figure4b|and4d). The worst window sizes of them are also close. When the. window size is set to about 2 or 3, respectively, the balance of similarity and relatedness is the worst. for them.\nFor Google's word analogical tasks (SEM and syn), the questions are quite related to the topic or. domain. For examples, there are questions about the capitals of the countries. They are associated but not synonymous. Therefore a larger window is usually better. However for SyN, using window size 9 is a little better than 10 in Figure|4d|and for MEN 8 is best in Figure|4f It may be because that. if the window is too large, it introduces too many words and reduces the sparsity (Chiu et al.2016)..\n33.80 70.8 39.10 70.6 39.00 33.60 70.4 38.90 38.80 33.40 70.2 38.70 70 69.8 38.60 33.20 38.58 69.6 33.00 38.40 69.4 38.30 32.80 69.2 38.20 69 38.10 32.60 68.8 38.00 5 10 15 2025 30 35 40 5 10 15 20 25 30 35 40 5 10 15 20 25 303540 Number of Negative Samples Number of Negative Samples Number of Negative Samples (a) Simlex-999 (SimLex) (b) Wordsim-353 (ws353) (c) Rare Word Similarity Dataset (RW) 73.20 68.00 55.60 73.00 67.00 66.00 55.40 72.80 65.00 72.60 64.00 55.20 72.40 63.00 55.00 72.20 CCorrer 62.00 Ccorrer 72.00 61.00 54.80 60.00 71.80 59.00 54.60 71.60 58.00 71.40 57.00 54.40 5 10 15 20 25 30 35 40 5 10 15 20 25 30 35 40 5 10 15 20 25 30 35 40 Number of Negative Samples Number of Negative Samples Number of Negative Samples (d) MEN dataset (MEN) (e) Word Analogical Reasoning (f) Word Analogical Reasoning (SEM) (SYN)\nIn Figures5a5c|and5f] we see that overfitting occurs when we use more than 15 negative samples In Figure [5b and Figure[5e it occurs from 25 and 20, respectively. In Figure[5d] the performance does not change very much when we use more than 30 negative samples.\nThe results indicate that too many negative samples may cause overfitting. For 3 of the 6 bench marks, it is best to use 15 negative samples. But we should be careful in practice use because the other different results suggest that the best number depends on the task.\nThe abrupt change at around 15 in Figure 5b|is interesting. wS353 is the smallest dataset among those we used. Because of the small size, the effects of randomness may cause such singularities when the vector-space is not well trained.\nIt is also a good way to show the effects of corpus size by comparing the proposed method to the situations above using corpora in varying size. Therefore we discuss them together in this section.\nWe use text83|together with eEnwiki9 and ukWaC described in section3.1. It is a small corpus containing 100 MB text. To show the difference, we report the benchmarks scores including no only SimLex, but also MEN, and the word analogical task (SEM and SyN). They are the other. benchmarks that are shown relatively solid in section|3.2 The vector-space dimension is set to 300 The context window size is set to 8. 25 negative samples are used in learning. The results are shown. in Figure6\n3http://mattmahoney.net/dc/text8.zip\nFigure 5: The scores of the benchmarks using different numbers of negative samples. For ws353. SimLex, RW and MEN, we report 100 * p (Spearman's rank correlation). 
For word analogical rea-. soning, we report the percentage of the correct answers. We use 100-dimension vectors. The context window size is set to 8.\nIn this section, we evaluate the effectiveness of our fuzzy approach, by comparing to the situations that set f (x) in Equation (1) as:\nf(x) = 1: It makes the model regard all paraphrases equally. They are all used witho drop-out. f(x) = 0: It makes the model use no paraphrases, equivalent to CBOw..\nFigure 6: The comparison of using the proposed function described in section|2.3] f(x) = 0 (equiv- alent to CBOw) and f(x) = 1 (no drop-out) as the control function. They are compared under different corpora in varying size. The green bar (the left) indicates the scores of the proposed func- tion; the blue bar (the middle) indicates the scores of f(x) = 0; the pink bar (the right) indicates the scores of f(x) = 1. We report 100 * p for SimLex and MEN, the percentage of correct answers for SEM and syN. The vector-space dimension is set to 300. The context window size is set to 8. 25 negative samples are used in learning.\nTherefore, we can see that the proposed control function using the fuzzy paraphrases annotated with the degrees of reliability improves the quality of the learned word vector-space.\nWe compared our work to the prior works using a lexicon to improve word vectors. However, we failed to use the public code to reproduce the works of|Yu & Dredze( (2014) and Bollegala et al. (2016). We also failed to find an available implementation of|Xu et al. (2014). Hence, we use the\n80 Proposed f(x)=0 75 f(x)=1 70 %nnmney 65 60 eooret 55 50 \\ d*00I 45 40 35 30 Text8 (100 MB) Enwiki9 (1 GB) UkWaC (12 GB)\nThe proposed function outperforms the others for SimLex and MEN under text8, for all the benchmarks under enwiki9, for SimLex, SEM and SYN under ukWaC. The proposed function is always better than f(x) = 1 in the experiments, no matter what the benchmark is or how big the corpus is. For SEm, the proposed function is weaker than f(x) = 0 under text8, slightly better under enwiki9, and obviously outperforms f(x) = 0 under ukWaC. As the proposed function out- performs under larger corpora, the relatively low scores under text8 may be caused by the effects of randomness: the proposed function involves random numbers; they bring huge instability under such tiny corpora. Another possible reason is that the control function is less useful for text8 because there are few polysemous words in the tiny corpus. There is no advantages to use f(x) = 1 instead of f(x) = 0 for both text8 and enwiki9 It shows that learning the context words replaced by paraphrases may be not a good idea without fuzzy approaches. However, if we use the proposed control function, the results are better and go beyond those of f(x) = 0 in most tests. It shows that the control function utilizing fuzzy paraphrases improves the performance.\nTable 3: Comparison to the prior Works. The scores of the prior works under ukWaC are fron Bollegala et al.(2016). The SYN score of ours and Bollegala's are marked as best together becaus the margin of error is 1.79 as shown in Table|1.\nThe MEN Dataset (MEN): Word Analogical Reasoning Task (SEM and SYN)\nRubenstein-Goodenough dataset (RG) (Rubenstein & Goodenough]1965) is also used in their work. However, we do not use it, because it fails the sanity check in Batchkarov et al.(2016): p ma increase when noise is added.\nWe use ukWaC to learn the word vectors, the same with Bollegala et al.(2016). We also use the. 
same parameters with the prior works: The vector-space dimension is set to 300; the context window. size is set to 8; the number of negative samples is set to 25. Then we calculate the cosine similarity of the words and report 100 * p for Men. We use the add method described in section3.2|and repor. the percentage of correct answers, for the word analogical reasoning task..\nTable 3 shows the results of the experiments. The of MEN and sEM is 0.86 and 0.44 as shown in Table|1] Therefore we see that our proposed method outperforms the prior works under these benchmarks. We consider our score for syN is as good asBollegala et al.(2016) achieved, and better than the others, because its margin of error is 1.79 as shown in Table|1."}, {"section_index": "6", "section_name": "S CONCLUSION & THE FUTURE WORKS", "section_text": "We proposed a fuzzy approach to control the contamination caused by the polysemous words wher a lexicon is used to improve the vector-space word representations. We annotate each paraphrase oi a word with a degree of reliability, like the members of a fuzzy set with their memberships, on the basis of their multilingual similarities to the original ones. We use the fuzzy paraphrases to learr a corpus by jointly learning a generated text, in which the original words are randomly replacec by their paraphrases. A paraphrase is less likely to be put into the generated text if it has lowe reliability than the others, and vice versa.\nWe tested the performance using different types of paraphrases in the lexicon PPDB2.0 and find that it is best to use the equivalence type and the entailment type. Using other related paraphrases deteriorates the performance\nWe explored the effects of parameters. We find that the best parameter setting depends on the tas We should tune the model carefully in practical use.\nWe evaluated the effectiveness of our approach by comparing it to the situations that simpler func. tions are used to control replacements: f(x) = 1 which accepts all, and f(x) = 0 which rejects\nMEN SEM SYN 76.99 67.48 67.89 70.90 61.46 69.33 50.10 29.90 32.64 43.46 37.07 40.06 34.36 44.42 60.50 36.65 52.50 1) 65.70 45.29 65.65\nsame corpus and benchmarks withBollegala et al.(2016) and compare our results with the reported scores of the prior works in their paper. The benchmarks are:.\nOur proposed method also achieved the top scores. pared with the prior works.\nThe fuzzy paraphrases can also be employed for the other models with some changes. We are going to show it in the future. The proposed idea for the polysemy problem without word sense. disambiguation is meaningful especially for practical use because it saves the effort of part-of-speech tagging and word sense disambiguation..\nBesides, the control function may be more accurate if it considers all the context. We are also going to work on it in the future\nWe have opened the source of a demo of the"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Marco Baroni. Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209-226, 2009.\nMiroslav Batchkaroy, Thomas Kober, Jeremy Reffin, Julie Weeds, and David Weir. A critique of word similarity as a method for evaluating distributional semantic models. the 54th annual meeting of the Association for Computational Linguistics (ACL 2016), pp. 7, 2016.\nPiotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 
Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606, 2016.\nDanushka Bollegala, Alsuhaibani Mohammed, Takanori Maehara, and Ken-Ichi Kawarabayashi Joint word representation learning using a corpus and a semantic lexicon. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI'16). 2016.\nElia Bruni, Nam Khanh Tran, and Marco Baroni. Multimodal distributional semantics. J. Artif. Int Res., 49(1):1-47, January 2014\nXinxiong Chen, Zhiyuan Liu, and Maosong Sun. A unified model for word sense representation and disambiguation. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), pp. 1025-1035. Citeseer, 2014.\nRajarshi Das, Manzil Zaheer, and Chris Dyer. Gaussian lda for topic models with word embedding In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics an the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers Association for Computational Linguistics, 2015.\nManaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL. 2015.\n4https://github.com/huajianjiu/Bernoulli-CBOFP\nall. We also repeated the experiments under a tiny, a medium sized, and a large corpus, to see the effects of the corpus size on the effectiveness. Our approach achieves the best in 3 of 4 benchmarks under the tiny corpus, and in all benchmarks under the medium sized and the large one. The results indicate that our approach is effective to improve the word vectors.\nUnlike the previous works that solve the problems about polysemy by estimating a vector for each word sense or word type, our approach keeps one vector per word. It makes the word vectors easier to use in practical terms: it is neither necessary to disambiguate the word senses nor to tag the part-of-speeches before we use the word vectors.\nLev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. In Proceedings of the 1Oth international conference on World Wide Web, pp. 406-414. ACM, 2001..\nJuri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. PPDB: The paraphras database. In Proceedings of NAACL-HLT, pp. 758-764, Atlanta, Georgia, June 2013. Associ ation for Computational Linguistics.\nEric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. Improving word rep resentations via global context and multiple word prototypes. In Proceedings of the 5Oth Annua Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pp. 873-882 Association for Computational Linguistics, 2012.\nYoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lar. Processing (EMNLP). 2014\nMinh-Thang Luong, Richard Socher, and Christopher D. Manning. Better word representations with recursive neural networks for morphology. In CoNLL, Sofia, Bulgaria, 2013\nTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word represer tations in vector space. In ICLR Workshop, 2013a.\nTomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed represen tations of words and phrases and their compositionality. In Advances in neural information pro cessing systems, pp. 3111-3119, 2013b.\nJeffrey Pennington, Richard Socher, and Christopher D. Manning. 
Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543 2014.\nFelix Hill, Roi Reichart, and Anna Korhonen. Simlex-999: Evaluating semantic models with (gen. uine) similarity estimation. Computational Linguistics, 2016.\nDmer Levy and Yoav Goldberg. Dependencybased word embeddings. In the 5Znd Annual Meeting of the Association for Computational Linguistics (ACL 2014), 2014a. Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Pro- ceedings of the 27th International Conference on Neural Information Processing Systems, Ad- vances in Neural Information Processing Systems 27 (NIPS 2014), pp. 2177-2185, Cambridge, MA, USA, 2014b. MIT Press. Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211- 225, 2015. Shaohua Li, Tat-Seng Chua, Jun Zhu, and Chunyan Miao. Generative topic embedding: a continuous representation of documents. In the 54th annual meeting of the Association for Computational Linguistics (ACL 2016). Association for Computational Linguistics, 2016.\nArvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. Efficient non- parametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1059 1069, Doha, Qatar, October 2014. Association for Computational Linguistics."}, {"section_index": "8", "section_name": "Herbert Rubenstein and John B Goodenough. Contextual correlates of synonymy. Communication. of the ACM, 8(10):627-633, 1965.", "section_text": "Peter D. Turney. Domain and function: A dual-space model of semantic relations and compositions J. Artif. Int. Res.. 44(1):533-585. May 2012. ISSN 1076-9757\nMo Yu and Mark Dredze. Improving lexical embeddings with semantic knowledge. In the 52n Annual Meeting of the Association for Computational Linguistics (ACL 2014), pp. 545-550. As sociation for Computational Linguistics, 2014..\nWojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization CoRR, abs/1409.2329. 2014\nRichard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. Semantic composi tionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 1201-1211. Association for Computational Linguistics, 2012.\nChang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie- Yan Liu. Rc-net: A general framework for incorporating knowledge into word representations. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Manage- ment, pp. 1219-1228. ACM, 2014."}] |
S1QefL5ge | [{"section_index": "0", "section_name": "ONLINE STRUCTURE LEARNING FOR SUM-PRODUCT NETWORKS WITH GAUSSIAN LEAVES", "section_text": "Wilson Hsu. Agastva Kalra & Pascal Poupart\nDavid R. Cheriton School of Computer Science University of Waterloo.\nwwhsu,a6kalra,ppoupart}@uwaterloo.ca"}, {"section_index": "1", "section_name": "17 INTRODUCTION", "section_text": "Sum-product networks (SPNs) were first introduced byPoon & Domingos(2011) as a new type of deep representation. They distinguish themselves from other types of neural networks by several desirable properties:\nThere is a catch: these nice properties arise only when the structure of the network satisfies certain. conditions (i.e., decomposability and completeness) (Poon & Domingos2011). Hence, it is not. easy to specify sum-product networks by hand. In particular, fully connected networks typically violate those conditions. Similarly, most sparse structures that are handcrafted by practitioners to compute specific types of features or embeddings also violate those conditions. While this may seem. like a major drawback, the benefit is that researchers have been forced to develop structure learning techniques to obtain valid SPNs that satisfy those conditions (Dennis & Ventura2012) Gens & Domingos]2013] Peharz et al.]2013] Lee et al.][2013] Rooshenas & Lowd2014]Adel et al.[2015 Vergari et al.[2015] Rahman & Gogate[2016] Mazen Melibari[2016). At the moment, the search for good network structures in other types of neural networks is typically done by hand based on. intuitions as well as trial and error. However the expectation is that automated structure learning techniques will eventually dominate. For this to happen, we need structure learning techniques that can scale easily to large amounts of data..\nTo that effect, we propose the first online structure learning technique for SPNs with Gaussian leaves The approach starts with a network structure that assumes that all variables are independent. This network structure is then updated as a stream of data points is processed. Whenever a statisticall significant correlation is detected between some variables. a correlation is introduced in the networl in the form of a multivariate Gaussian or a mixture distribution. This is done while ensuring tha the resulting network structure is necessarily valid. The approach is evaluated on several large benchmark datasets."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Sum-product networks have recently emerged as an attractive representation due to their dual view as a special type of deep neural network with clear semantics and a special type of probabilistic graphical model for which inference is always tractable. Those properties follow from some conditions (i.e., completeness and decomposability) that must be respected by the structure of the network. As a result, it is not easy to specify a valid sum-product network by hand and therefore structure learning techniques are typically used in practice. This paper describes the first online structure learning technique for continuous SPNs with Gaussian leaves. We also introduce an accompanying new parameter learning technique.\n1. The quantities computed by each node can be clearly interpreted as (un-normalized) prob abilities. 2. SPNs are equivalent to Bayesian and Markov networks (Zhao et al.2015) while ensuring that exact inference has linear complexity with respect to the size of the network. 3. 
They represent generative models that naturally handle arbitrary queries with missing data while changing which variables are treated as inputs and outputs."}, {"section_index": "3", "section_name": "2 BACKGROUND", "section_text": "Sum-product networks (SPNs) were first proposed byPoon & Domingos(2011) as a new type o1. deep architecture consisting of a rooted acyclic directed graph with interior nodes that are sums anc products while the leaves are tractable distributions, including Bernoulli distributions for discrete SPNs and Gaussian distributions for continuous SPNs. The edges emanating from sum nodes are. labeled with non-negative weights w. An SPN encodes a function f(X = x) that takes as input a variable assignment X = x and produces an output at its root. This function is defined recursively. at each node n as follows:.\nPr(Xn =Xn) if isLeaf(n) fn(X=x) = i Wifchildi(n)(x) if isSum(n) I; fchild;(n)(x) if isProduct(n\nHere, Xn = xn denotes the variable assignment restricted to the variables contained in the leaf n If none of the variables in leaf n are instantiated by X = x then Pr(Xn = xn) = Pr(0) = 1. Note also that if leaf n contains continuous variables, then Pr(Xn = xn) should be interpreted as pdf(Xn = xn):\nAn SPN is a neural network in the sense that each interior node can be interpreted as computing a linear combination of its children followed by a potentially non-linear activation function. Without loss of generality, assume that the SPN is organized in alternating layers of sums and product nodes It is easy to see that sum-nodes compute a linear combination of their children. Product nodes can be interpreted as the sum of its children in the log domain. Hence sum-product networks can be viewed as neural networks with logarithmic and exponential activation functions.\nAn SPN can also be viewed as encoding a joint distribution over the random variables in its leaves when the network structure satisfies certain conditions. These conditions are often defined in terms of the notion of scope\nDefinition 1 (Scope). The scope(n) of a node n is the set of variables that are descendants of n\nHere decomposability allows us to interpret product nodes as computing factored distributions with. respect to disjoint sets of variables, which ensures that the product is a valid distribution over the. union of the scopes of the children. Similarly, completeness allows us to interpret sum nodes as computing a mixture of the distributions encoded by the children since they all have the same scope. Each child is a mixture component with mixture probability proportional to its weight. Hence, in. complete and decomposable SPNs, the sub-SPN rooted at each node can be interpreted as encoding. an (un-normalized) joint distribution over its scope. We can use the function f to answer inference. queries with respect to the joint distribution encoded by the entire SPN as follows:.\nfroot(X=x) Marginal queries: Pr(X = x) = froot (0) froot(X=x,Y=y) Conditional queries: Pr(X = x|Y = y) fn otY=v\nfroot(X=x) Marginal queries: Pr(X = x) = froot(0) froot(X=x,Y=y) Conditional queries: Pr(X = x|Y = y) = froot(Y=y)\nUnlike most neural networks that can answer only queries with fixed inputs and outputs, SPNs can. answer conditional inference queries with varying inputs and outputs simply by changing the set of\n1Consecutive sum nodes can always be merged into a single sum node. Similarly, consecutive product nodes can always be merged into a single product node.\nThe paper is structured as follows. 
Section2 provides some background about sum-product net-. works. Section 3 describes our online structure learning technique for SPNs with Gaussian leaves Section|4[evaluates the performance of our structure learning technique on several large benchmark datasets. Finally, Section 5|concludes the paper and discusses possible directions for future work\nvariables that are queried (outputs) and conditioned on (inputs). Furthermore, SPNs can be used tc generate data by sampling from the joint distributions they encode. This is achieved by a top-dowi pass through the network. Starting at the root, each child of a product node is followed, a singl child of a sum node is sampled according to the unnormalized distribution encoded by the weight of the sum node and a variable assignment is sampled in each leaf that is reached. This is particularl useful in natural language generation tasks and image completion tasks (Poon & Domingos 2011)\nNote also that inference queries can be answered exactly in linear time with respect to the size of the network since each query requires two evaluations of the network function f and each evaluation is performed in a bottom-up pass through the network. This means that SPNs can also be viewed as a special type of tractable probabilistic graphical model, in contrast to Bayesian and Markov networks for which inference is #P-hard (RothJ 1996). Any SPN can be converted into an equivalent bipartite Bayesian network without any exponential blow up, while Bayesian and Markov networks can be converted into equivalent SPNs at the risk of an exponential blow up (Zhao et al.|2015)."}, {"section_index": "4", "section_name": "2.1 PARAMETER LEARNING", "section_text": "The weights of an SPN are its parameters. They can be estimated by maximizing the likelihood of a dataset (generative training) (Poon & Domingos 2011) or the conditional likelihood of some output features given some input features (discriminative training) by Stochastic Gradient Descent (SGD) (Gens & Domingos2012). Since SPNs are generative probabilistic models where the sum nodes can be interpreted as hidden variables that induce a mixture, the parameters can also be es- timated by Expectation Maximization (EM) (Poon & Domingos 2011) Peharz, 2015).Zhao & Poupart(2016) provides a unifying framework that explains how likelihood maximization in SPNs corresponds to a signomial optimization problem where SGD is a first order procedure, one can also consider a sequential monomial approximation and EM corresponds to a concave-convex procedure that converges faster than the other techniques. Since SPNs are deep architectures, SGD and EM suffer from vanishing updates and therefore \"hard\"' variants have been proposed to remedy to this problem (Poon & Domingos!2011f Gens & Domingos2012). By replacing all sum nodes by max nodes in an SPN, we obtain a max-product network where the gradient is constant (hard SGD) and latent variables become deterministic (hard EM). It is also possible to train SPNs in an online fash- ion based on streaming data (Lee et al.]2013] Rashwan et al.]2016}Zhao et al.]2016} Jaini et al. 2016). In particular, it was shown that online Bayesian moment matching (Rashwan et al.2016 Jaini et al. 2016) and online collapsed variational Bayes (Zhao et al.2016) perform much better than SGD and online EM."}, {"section_index": "5", "section_name": "2.2 STRUCTURE LEARNING", "section_text": "Since it is difficult to specify network structures for SPNs that satisfy the decomposability and com. 
pleteness properties. several automated structure learning techniques have been proposed (Dennis &Ventura 2012}Gens & Domingos 2013] Peharz et al.]2013f Lee et al.]2013 Rooshenas & Lowd2014Adel et al.2015Vergari et al.2015Rahman & Gogate2016 Mazen Melibari 2016). The first two structure learning techniques (Dennis & Ventura2012) Gens & Domingos. 2013) are top down approaches that alternate between instance clustering to construct sum nodes. and variable partitioning to construct product nodes. We can also combine instance clustering and. variable partitioning in one step with a rank-one submatrix extraction by performing a singular value decomposition (Adel et al.|[2015). Alternatively, we can learn the structure of SPNs in a bottom-up. fashion by incrementally clustering correlated variables (Peharz et al.] 2013). These algorithms all. learn SPNs with a tree structure and univariate leaves. It is possible to learn SPNs with multivariate. leaves by using a hybrid technique that learns an SPN in a top down fashion, but stops early and. constructs multivariate leaves by fitting a tractable probabilistic graphical model over the variables. in each leaf (Rooshenas & Lowd2014] Vergari et al.]2015). It is also possible to merge similar. subtrees into directed acyclic graphs in a post-processing step to reduce the size of the resulting. SPN (Rahman & Gogate2016). Furthermore,Mazen Melibari(2016) proposed dynamic SPNs for variable length data and described a search-and-score structure learning technique that does a local. search over the space of network structures..\nSo far, all these structure learning algorithms are batch techniques that assume that the full datasel is available and can be scanned multiple times.. Lee et al. (2013) describes an online structure\nlearning technique that gradually grows a network structure based on mini-batches. The algorithm is a variant of LearnSPN (Gens & Domingos2013) where the clustering step is modified to use online clustering. As a result, sum nodes can be extended with more children when the algorithm encounters a mini-batch that is better clustered with additional clusters. Product nodes are never modified after their creation.\nSince existing structure learning techniques have all been designed for discrete SPNs and have yet to be extended to continuous SPNs such as Gaussian SPNs, the state of the art for continuous (and large scale) datasets is to generate a random network structure that satisfies decomposability and completeness after which the weights are learned by a scalable online learning technique (Jaini et al.l2016). We advance the state of the art by proposing a first online structure learning technique for Gaussian SPNs."}, {"section_index": "6", "section_name": "PROPOSED ALGORITHM", "section_text": "Suppose we want to model a probability distribution over a d-dimensional space.. Theal gorithm starts with a fully factorized joint probability distribution over all variables, p(x). p(x1, x2,..., xd) = p1(x1)p2(x2) ..:pd(xd). This distribution is represented by a product node. with d children, the ith of which is a univariate distribution over the variable x. Therefore, ini- tially we assume that the variables are independent, and the algorithm will update this probability. distribution as new data points are processed.\nupdating the parameters of the SPN, and updating the structure of the network\nThe parameters are updated by keeping track of running sufficient statistics. 
There are two types of parameters in the model: weights on the branches under a sum node, and parameters for the Gaussian distribution in a leaf node\nWe propose a new online algorithm for parameter learning that is simple while ensuring that aftei each update, the likelihood of the last processed data point is increased (similar to stochastic gradient ascent). Algorithm 1|describes the pseudocode of this procedure. Every node in the network has a count, nc, initialized to 1. When a data point is received, the likelihood of this data point is computed at each node. Then the parameters of the network are updated in a recursive top-down fashion by starting at the root node. When a sum node is traversed, its count is increased by 1 and the count of the child with the highest likelihood is increased by 1. This effectively increases the weight of the child with the highest likelihood while decreasing the weights of the remaining children. As a result, the overall likelihood at the sum node will increase. The weight ws.c of a branch between a sum node s and one of its children c can then be estimated as.\nwhere ns is the count of the sum node and nc is the count of the child node. We also recursively update the subtree of the child with the highest likelihood. In the case of ties, we simply choose one of the children with highest likelihood at random to be updated..\nSince there are no parameters associated with a product node, the only way to increase its likelihood. is to increase the likelihood at each of its children. We increment the count at each child of a product node and recursively update the subtrees rooted at each child..\nSince each leaf node represents a Gaussian distribution, it keeps track of the empirical mean vector u and empirical covariance matrix for the variables in its scope. When a leaf node with a current\nIn this work, we assume that the leaf nodes all have Gaussian distributions. A leaf node may have. more than one variable in its scope. in which case it follows a multivariate Gaussian distribution\nGiven a mini-batch of data points, the algorithm passes the points through the network from the root to the leaf nodes and updates each node along the way. This update includes two parts:\nnc Ws,c ns\nInput: SPN and m data points Output: SPN with updated parameters nroot+ nroot + m if isProduct(root) then for each child of root do. parameterUpdate(child, data end for else if isSum(root) then. for each child of root do. subset < {x E data | likelihood(child, x) likelihood(child', x) Vchild' of root} parameterUpdate(child, subset. nchild+1 Wroot,child nroot+#children end for else if isLea f (root) then. update mean (root) based on Eq.3 update covariance matrix (root) based on Eq.4 end if\ncount of n receives a batch of m data points x(1), x(2) , x(m), the empi covariance are updated according to the equations:\nm 1 ni, k -i-j) n+m\nwhere i and j index the variables in the leaf node's scope, and ' and ' are the new mean and covariance after the update\nThis parameter update technique is related to, but different from hard SGD and hard EM use in (Poon & Domingos) 2011 Gens & Domingos2012] Lee et al.]2013].Hard SGD and harc EM also keep track of a count for the child of each sum node and increment those counts each time a data point reaches this child. However, to decide when a child is reached by a data point, they replace all descendant sum nodes by max nodes and evaluate the resulting max-product network. 
Ir contrast, we retain the descendant sum nodes and evaluate the original sum-product network as it is This evaluates more faithfully the probability that a data point is generated by a child.\nAlg.1does a single pass through the data. The complexity of updating the parameters after each data point is linear in the size of the network (i.e., # of edges) since it takes one bottom up pass to compute the likelihood of the data point at each node and one top-down pass to update the sufficient statistics and the weights. The update of the sufficient statistics can be seen as locally maximizing the likelihood of the data. The empirical mean and covariance of the Gaussian leaves locally increase the likelihood of the data that reach that leaf. Similarly, the count ratios used to set the weights under a sum node locally increase the likelihood of the data that reach each child. We prove this result below.\nTheorem 1. Let 0, be the set of parameters of an SPN s, and let fs(|0s) be the probability density. function of the SPN. Given an observation x, suppose the parameters are updated to 0' based on the running average update procedure, then we have fs(x[0s) fs(x[0s).\nProof. We will prove the theorem by induction. First suppose the SPN is just one leaf node. In this case, the parameters are the empirical mean and covariance, which is the maximum likelihood estimator for Gaussian distribution. Suppose 0 consists of the parameters learned using n data x(n), and 0' consists of the parameters learned using the same n data points and an points x(1)\n1 m nui+ n+m k=1\nn fs(x|0')] f(x(i)|0') fs(x|0s) fs(x(i)|0s) fs(x|0s) fs(x(i)|0s i=1 i=1 i=1\n1 fu(x|0u)+ntft(x n + 1 fu(x|0u) + ntft(x 0+ by inductive hypothe n+1 1 t ft(x|0t)+ ntfi n+1 n NtIt x|0+ n fs(x|0s)\nThe simple online parameter learning described above can be easily extended to enable online struc ture learning. Algorithm 2 describes the pseudocode of the resulting procedure called oSLRAU (online Structure Learning with Running Average Update). Similar to leaf nodes, each product node also keeps track of the empirical mean vector and empirical covariance matrix of the variables in it scope. These are updated in the same way as the leaf nodes.\nInitially, when a product node is created, all variables in the scope are assumed independent (see. Algorithm5). As new data points arrive at a product node, the covariance matrix is updated, and if. the absolute value of the Pearson correlation coefficient between two variables are above a certain threshold, the algorithm updates the structure so that the two variables become correlated in the. model.\nWe correlate two variables in the model by combining the child nodes whose scopes contain the two variables. The algorithm employs two approaches to combine the two child nodes:\ncreate a multivariate leaf node (Algorithm4), or create a mixture of two components over the variables (Algorithm3\nAnother way to correlate x1 and x3 is to create a mixture, as shown in the right part of Figure[1 The mixture has two components. The first component contains the original children of the product node that contain x1 and x3. The second component is a new product node, which is again initialized to have a fully factorized distribution over its scope (Alg. 5). The mini-batch of data points are then passed down the new mixture to update its parameters.\nThese two processes are depicted in Figure1 On the left, a product node with scope x1, ..., X5. originally has three children. 
The product node keeps track of the empirical mean and empirical covariance for these five variables. Suppose it receives a mini-batch of data and updates the statistics As a result of this update, x1 and x3 now have a correlation above the threshold..\nFigure [1illustrates the two approaches to model this correlation. In the middle of Figure [1] the algorithm combines the two child nodes that have x1 and x3 in their scope, and turns them into a multivariate leaf node. Since the product node already keeps track of the mean and covariance of these variables, we can simply use those statistics as the parameters for the new leaf node\nChild3 X4, X5 Child2 Child3 Child3 Child1 Leaf X1, X2 X3 X4,X5 X1, X2, X3 X4, X5 Child1 Child2 Leaf Leaf Leaf X1, X2 X3 X1 X2 X3\nFigure 1: Depiction of how correlations between variables are introduced in the model. Left: original product node with three children. Middle: combine Child1 and Child2 into a multivariate leaf node. (Alg.4). Right: create a mixture to model the correlation (Alg.3).\nNote that although the children are drawn like leaf nodes in the diagrams, they can in fact be entire subtrees. Since the process does not involve the parameters in a child, it works the same way if some of the children are trees instead of single nodes..\nThe technique chosen to induce a correlation depends on the number of variables in the scope. The. algorithm creates a multivariate leaf node when the combined scope of the two child nodes has a. number of variables that does not exceed some threshold and if the total number of variables in the problem is greater than this threshold, otherwise it creates a mixture. Since the number of parameters. in multivariate Gaussian leaves grows at a quadratic rate with respect to the number of variables, i. is not advised to consider multivariate leaves with too many variables. In contrast, the mixture. construction increases the number of parameters at a linear rate, which is less prone to overfitting. when many variables are correlated.\nTo simplify the structure, if a product node ends up with only one child, it is removed from the network, and its only child is joined with its parent. Similarly, if a sum node ends up being a child of another sum node, then the child sum node can be removed, and all its children are promoted one layer up.\nOur algorithm (oSLRAU) is related to, but different from the online structure learning technique proposed byLee et al.[(2013). Lee et al.'s technique was applied to discrete datasets while oSLRAl learns SPNs with Gaussian leaves based on real-valued data. Furthermore, Lee et al's technique incrementally constructs a network in a top down fashion by adding children to sum nodes by onlin clustering. Once a product node is constructed, it is never modified. In contrast, oSLRAU incremen tally constructs a network in a bottom up fashion by detecting correlations and modifying produc nodes to represent these correlations. Finally, Lee et al.'s technique updates the parameters by harc EM (which implicitly works with a max-product network) while oSLRAU updates the parameters by Alg.[1|(which retains the original sum-product network) as explained in the previous section.\nNote that the this structure learning technique does a single pass through the data and therefore is. entirely online. The time and space complexity of updating the structure after each data point is. linear in the size of the network (i.e., # of edges) and quadratic in the number of features (since. 
product nodes store a covariance matrix that is quadratic in the size of their scope). The algorithm. also ensures that the decomposability and completeness properties are preserved after each update\np(x1,x2,x3) = [0.25N(x1|1,1)N(x2|2,2) + 0.25N(x1|11,1)N(x2|12,2] + 0.25N(x1|21,1)N(x2|22,2) + 0.25N(x1|31,1)N(x2|32,2)|N(x3|3\nwhere N(|, o2) is the normal distribution with mean and variance o2\nTherefore, the first two dimensions x1 and x2 are generated from a Gaussian mixture with fou components, and x3 is independent from the other two variables\nStarting from a fully factorized distribution, we would expect x3 to remain factorized after learn ing from data. Furthermore, the algorithm should generate new components along the first two dimensions as more data points are received since x1 and x2 are correlated\nThis is indeed what happens. Figure 2|shows the structure learned after 200 and 500 data points. The variable x3 remains factorized regardless of the number of data points seen, whereas more components are created for x1 and x2 as more data points are processed..\nAlgorithm 3 createMixture(root, child1, child2) Input: SPN and two children to be merged Output: new mixture model remove child1 and child, from root. component1 create product node add child1 and child2 as children of component1. ncomponent1 Nroot jointScope scope(child1) U scope(child2) 'jointScope,jointScope component2 createFactoredModel(jointScope) ncomponent2 + 0 mixture create sum node add component1 and component2 as children of mixture. nmixture+ nroot ncomponent+1 Wmixture,component1 nmixture+2 ncomponent2+1 Wmixture,component nmixture+2 add mixture as child of root return root Algorithm 4 createMultiV arGaussian(root, child1, child2) Input: SPN, two children to be merged and data. Output: new multivariate Gaussian create multiV arGaussian jointScope {scope(child1) U scope(child2)} ,(root) (multiVarGaussian) (root) jointScope,jointScope nmultiVarGaussian + nroot return multiV arGaussian Algorithm 5 createFactoredM odel(scope) Input: scope (set of variables) Output: fully factored SPN f actoredModel create product node for each i E scope do end for (factoredModel) 0 n f actoredModel 0 return factoredModel\nAlgorithm 3 createMixture(root, cnild1, cnild2 Input: SPN and two children to be merged. Output: new mixture model remove child1 and child2 from root. component1 create product node add child1 and child2 as children of component1. ncomponent1 Nroot jointScope scope(child1) U scope(child2) (component1) (root) 'jointScope,joint Scope component2 createFactoredModel(jointScope) ncomponent2 + 0 mixture create sum node add component1 and component2 as children of mixture. Nmixture+ Nroot ncomponent1+1 Wmixture,component1+ nmixture+2 Ncomponent2+1 Wmixture,component2 nmixture+2 add mixture as child of root. return root\nAlgorithm 4 createM ultiV arGaussian(root, child1, child)\nFigure|3|shows the data points along the first two dimensions and the Gaussian components learned We can see that the algorithm generates new components to model the correlation between x1 and x2 as it processes more data."}, {"section_index": "7", "section_name": "4.2 COMPARISON TO OTHER ALGORITHMS", "section_text": "In a second experiment, we compare our algorithm to several alternatives on the same datasets used. by Jaini et al.[(2016). 
We use 0.1 as the correlation threshold in all experiments, and we use mini batch sizes of 1 for the three datasets with fewest instances (Quake, Banknote, Abalone), 8 for the two slightly larger ones (Kinematics, CA), and 256 for the two datasets with most instances (Flow Size, Sensorless).\nFigure 2: Learning the structure from the toy dataset using univariate leaf nodes. Left: after 200 data points. Right: after 500 data points.\n30 30 25 25 20 20 15 15 10 10 5 5 0 O 0 5 10 15 20 25 30 35 0 5 10 15 20 25 30 35 X2 X2\nFigure 3: Blue dots are the data points from the toy dataset, and the red ellipses show the diagonal Gaussian components learned. Left: after 200 data points. Right: after 500 data points..\nX3 X3 + N(3, 3) + N(3, 3) 0.256 0.232 0.64 0.36 0.256 0.256 X X1 X2 X1 X2 X1 X2 X1 X2 X1 X2 X1 X2 N(9, 58) N(10, 57 N(28, 25) N10,24 N(1, 3) N(2, 4) N(11, 6) N(12, 6) N(22, 9) N(23, 8) N(31, 1) N(32, 2)\nTable 1: Average log-likelihood scores with standard error on small real-world data sets. The best results are highlighted in bold. (random) indicates a random network structure and (GMM) indicates a fixed network structure corresponding to a Gaussian mixture model..\nDataset Flow Size Quake Banknote Abalone Kinematics CA Sensorless # of vars 3 4 4 8 8 22 48 oSLRAU 14.78 -1.86 -2.04 -1.12 -11.15 17.10 54.82 0.97 0.20 0.15 0.21 0.03 1.36 1.67 oBMM -1.82 -11.19 -2.47 1.58 (random) 0.19 0.03 0.56 1.28 oEM -11.36 -11.35 -31.34 -3.40 (random) 0.19 0.03 1.07 6.06 oBMM 4.80 -3.84 -4.81 -1.21 -11.24 -1.78 (GMM) 0.67 0.16 0.13 0.36 0.04 0.59 oEM -0.49 -5.50 -4.81 -3.53 -11.35 -21.39 (GMM) 3.29 0.41 0.13 1.68 0.03 1.58 SRBM -0.79 -2.38 -2.76 -2.28 -5.55 -4.95 -26.91 0.004 0.01 0.001 0.001 0.02 0.003 0.03 GenMMN 0.40 -3.83 -1.70 -3.29 -11.36 -5.41 -29.41 0.007 0.21 0.03 0.10 0.02 0.14 1.16\nThe network structures for GenMMNs and SRBMs are fully connected while ensuring that th number of parameters is comparable to those of the SPNs. oSLRAU outperforms these models on 5 datasets while SRBMs and GenMMNs each outperform oSLRAU on one dataset. Althougl SRBMs and GenMMNs are more expressive than SPNs since they allow other types of nodes beyonc sums and products, training GenMMNs and SRBMs is notoriously difficult. In contrast, oSLRAL provides a simple and effective way of optimizing the structure and parameters of SPNs that capture. well the correlations between variables and therefore yields good results."}, {"section_index": "8", "section_name": "4.3 LARGE DATASETS", "section_text": "We also tested oSLRAU on larger datasets to evaluate its scaling properties. Table2 shows the number of attributes and data points in each dataset. Table 3|compares the average log-likelihood of oSLRAU to that of randomly generated networks (which are the state of the art for obtain a valid continuous SPNs) for those large datasets. For a fair comparison we generated random networks that are at least as large as the networks obtained by oSLRAU. oSLRAU achieves higher log-likelihood than random networks since it effectively discovers empirical correlations and generates a structure that captures those correlations.\nWe also compare oSLRAU to a publicly available implementation of RealNVF2l Since the bench. marks include a variety of problems from different domains and it is not clear what network ar chitecture would work best, we used a default 2-hidden-layer fully connected network. The twc.\nThe experimental results for our algorithm called online structure learning with running average. 
update (oSLRAU) are listed in Table 1along with results reproduced from Jaini et al.(2016) The table reports the average test log likelihoods with standard error on 10-fold cross validation. oSLRAU achieved better log likelihoods than online Bayesian moment matching (oBMM) (Jaini et al.] 2016) and online expectation maximization (oEM) (Cappe & Moulines]2009) with network structures generated at random or corresponding to Gaussian mixture models (GMMs). This high- lights the main advantage of oSLRAU: learning a structure that models the data. Stacked Restricted Boltzmann Machines (SRBMs) (Salakhutdinov & Hinton]2009) and Generative Moment Matching Networks (GenMMNs) (Li et al.2015) are other types of deep generative models. Since it is not. possible to compute the likelihood of data points with GenMMNs, the model is augmented with Parzen windows. More specifically, 10,o00 samples are generated using the resulting GenMMNs. and a Gaussian kernel is estimated for each sample by adjusting its parameters to maximize the likelihood of a validation set. However, as pointed out by Theis et al.[(2015) this method only pro- vides an approximate estimate of the log-likelihood and therefore the log-likelihood reported for. GenMMNs in Table 1may not be directly comparable to the log-likelihood of other models.\nTable 2: Information for each large dataset\nTable 3: Average log-likelihood scores with standard error on large real-world data sets. The bes1 results among the online techniques (random, oSLRAU and RealNVP online) are highlighted in bold. Results for RealNVP offline are also included for comparison purposes..\nTable4|reports the training time (seconds) and the size (# of nodes) of the resulting SPNs for eacl. dataset when running oSLRAU and a variant that stops structure learning early. The experiment were carried out on an Amazon c4.xlarge machine with 4 vCPUs (high frequency Intel Xeon E5 2666 v3 Haswell processors) and 7.5 Gb of RAM. The times are relatively short since oSLRAU is ai online algorithm and therefore does a single pass through the data. Since it gradually constructs th structure of the SPN as it processes the data, we can also stop the updates to the structure early (whil still updating the parameters). This helps to mitigate overfitting while producing much smaller SPN. and reducing the running time. In the columns labeled \"early stop\"' we report the results achieve when structure learning is stopped after processing one ninth of the data. The resulting SPNs are. significantly smaller, while achieving a log-likelihood that is close to that of oSLRAU without earl stopping.\nThe size of the resulting SPNs and their log-likelihood also depend on the correlation threshold used. to determine when the structure should be updated to account for a detected correlation, and the maximum size of a leaf node used to determine when to branch off into a new subtree.\nDataset Datapoints Variables Voxforge 3,603,643 39 Power 2,049,280 4 Network 434,873 3 GasSen 8,386,765 16 MSD 515,344 90 GasSenH 928,991 10\nDatasets Random oSLRAU RealNVP Online RealNVP Offline Voxforge -33.9 0.3 -29.6 0.0 -169.0 0.6 -168.2 0.8 Power -2.83 0.13 -2.46 0.11 -18.70 0.19 -17.85 0.22 Network. -5.34 0.03 -4.27 0.04 -10.80 0.02 -7.89 0.05 GasSen -1142 -102 4 -748 99 -443 64 MSD -538.8 0.7 -531.4 0.3 -362.4 0.4 -257.1 2.03 GasSenH -21.5 1.3 -15.6 1.2 -44.5 0.1 44.2 0.1\nayers have the same size. For a fair comparison, we used a number of nodes per layer that yields. 
approximately the same number of parameters as the sum product networks. Training was done by. stochastic gradient descent in TensorFlow with a step size of O.01 and mini-batch sizes that vary. from 100 to 1500 depending on the size of the dataset. We report the results for online learning. single iteration) and offline learning (validation loss stops decreasing). In this experiment, the cor-. relation threshold was kept constant at O.1. To determine the maximum number of variables in multivariate leaves, we followed the following rule: at most one variable per leaf if the problem has. 3 features or less and then increase the maximum number of variables per leaf up to 4 depending on. the number of features. Further analysis on the effects of varying the maximum number of variables. per leaf are available below. We do this to balance the size and the expressiveness of the resulting. SPN. oSLRAU outperformed RealNVP on 5 of the 6 datasets. This can be explained by the fact. that oSLRAU learns a structure that is suitable for each problem while RealNVP does not learn any. structure. Note that it should be possible for RealNVP to obtain better results by using a better ar-. chitecture than a default 2-hidden-layer network, however in the absence of domain knowledge this. is difficult. Furthermore, in online learning with streaming data, it is not possible to do an offline. search over some hyperparameters such as the number of layers and nodes in order to fine tune the. architecture. Hence, the results presented in Table|3|highlight the importance of an online structure. learning technique such as oSLRAU to obtain a suitable network structure with streaming data in. the absence of domain knowledge\nTable 4: Large datasets: comparison of oSLRAU with and without early stopping (i.e., no structure learning after one ninth of the data is processed, but still updating the parameters)..\nTable 5: Log likelihoods with standard error as we vary the threshold for the maximum # of variables in a multivariate Gaussian leaf. No results are reported (dashes) when the maximum # of variables is greater than the total number of variables.\nTable 6: Average times (seconds) as we vary the threshold for the maximum # of variables in a multivariate Gaussian leaf. No results are reported (dashes) when the maximum # of variables is. greater than the total number of variables..\nTable 7: Average SPN sizes (# of nodes) as we vary the threshold for the maximum # of variables in a multivariate Gaussian leaf. 
No results are reported (dashes) when the maximum # of variables is greater than the total number of variables.

           Maximum # of Variables per Leaf Node
Dataset    1        2        3        4        5
Power      14269    2813     427      8        -
Network    7214     1033     7        -        -
GasSen     13874    6879     5057     772      738
MSD        6547     3114     802      672      582
GasSenH    1901     1203     920      798      664

To understand the impact that the maximum number of variables per leaf node has on the resulting SPN, we performed experiments where the minibatch size and correlation threshold were held constant for a given dataset while the maximum number of variables per leaf node varies. We report the log likelihood with standard error after ten-fold cross validation, as well as average size and average time, in Tables 5, 6 and 7. As expected, the number of nodes in an SPN decreases as the leaf node cap increases, since there will be less branching. What is interesting is that depending on the type of correlations in the datasets, different sizes perform better or worse. For example in Power, we notice that univariate leaf nodes are the best, but in GasSenH, slightly larger leaf nodes tend to do well. We show that too many variables in a leaf node leads to worse performance and underfitting, and in some cases too few variables per leaf node leads to overfitting. These results show that in general the largest decrease in size and time while maintaining good performance occurs with a maximum of 3 variables per leaf node. Therefore, in practice, 3 variables per leaf node works well, except when there are only a few variables in the dataset, in which case 1 is a good choice.

Tables 8, 9 and 10 show respectively how the log-likelihood, time and size change as we vary the correlation threshold from 0.05 to 0.7. A very small correlation threshold tends to detect spurious correlations and lead to overfitting, while a large correlation threshold tends to miss some correlations and lead to underfitting. The results in Table 8 generally support this tendency, subject to noise due to sample effects. Since the highest log-likelihood was achieved in three of the datasets with a correlation threshold of 0.1, this explains why we used 0.1 as the threshold in the previous experiments.
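To make the role of the correlation threshold concrete, the following is a minimal Python sketch of the kind of streaming correlation test described above, using Welford-style running statistics; the class and function names are ours for illustration and are not oSLRAU's actual API.

import numpy as np

class RunningCorrelation:
    """Streaming estimate of pairwise Pearson correlations over a data stream."""
    def __init__(self, d):
        self.n = 0
        self.mean = np.zeros(d)
        self.c = np.zeros((d, d))  # running sum of outer products of deviations

    def update(self, x):
        # standard online update of the mean and the co-moment matrix
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.c += np.outer(delta, x - self.mean)

    def exceeds(self, threshold):
        # True for variable pairs whose empirical |correlation| crosses the threshold
        var = np.diag(self.c)
        denom = np.sqrt(np.outer(var, var)) + 1e-12
        corr = self.c / denom
        np.fill_diagonal(corr, 0.0)
        return np.abs(corr) > threshold

# e.g., trigger a structure update for a leaf's scope when
# RunningCorrelation.exceeds(0.1) flags a new variable pair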
Tables 9 and 10 also show that the average time and size of the resulting SPNs generally decrease (subject to noise) as the correlation threshold increases, since fewer correlations tend to be detected.

Table 8: Log likelihoods for different correlation thresholds

           Correlation Threshold
Dataset    0.05            0.1             0.2             0.3             0.5             0.7
Power      -2.37 ± 0.13    -2.46 ± 0.11    -2.20 ± 0.18    -3.02 ± 0.24    -4.65 ± 0.11    -4.68 ± 0.09
Network    -3.98 ± 0.09    -4.27 ± 0.02    -4.75 ± 0.02    -4.75 ± 0.02    -4.75 ± 0.02    -4.75 ± 0.02
GasSen     -104 ± 5        -102 ± 4        -102 ± 3        -102 ± 3        -103 ± 3        -110 ± 3
MSD        -531.4 ± 0.3    -531.4 ± 0.3    -531.4 ± 0.3    -531.4 ± 0.3    -532.0 ± 0.3    -536.2 ± 0.1
GasSenH    -15.6 ± 1.2     -15.6 ± 1.2     -15.8 ± 1.1     -16.2 ± 1.4     -16.1 ± 1.4     -17.2 ± 1.4

Table 9: Average times (seconds) as we vary the correlation threshold

           Correlation Threshold
Dataset    0.05     0.1      0.2      0.3      0.5      0.7
Power      197      183      130      39       10       9
Network    20       14       1.9      1.9      1.9      1.9
GasSen     370      351      349      366      423      142
MSD        44.3     43.7     44.3     44.0     43.0     30.3
GasSenH    11.8     11.7     11.9     13.0     12.0     15.1

Table 10: Average SPN sizes (# of nodes) as the correlation threshold changes

           Correlation Threshold
Dataset    0.05     0.1      0.2      0.3      0.5      0.7
Power      24914    23360    16006    2813     11       11
Network    11233    7214     9        9        9        9
GasSen     5315     5057     5041     5035     4581     490
MSD        672      672      674      674      660      448
GasSenH    920      920      887      877      1275     796

This paper describes a first online structure learning technique for Gaussian SPNs that does a single pass through the data. This allowed us to learn the structure of Gaussian SPNs in domains for which the state of the art was previously to generate a random network structure. This algorithm can also scale to large datasets efficiently.

In the future, this work could be extended in several directions. We are investigating the combination of our structure learning technique with other parameter learning methods. Currently, we are simply learning the parameters by keeping running statistics for the weights, mean vectors, and covariance matrices. It might be possible to improve the performance by using more sophisticated parameter learning algorithms. We would also like to extend the structure learning algorithm to discrete variables. Finally, we would like to look into ways to automatically control the complexity of the networks. For example, it would be useful to add a regularization mechanism to avoid possible overfitting."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Tameem Adel, David Balduzzi, and Ali Ghodsi. Learning the structure of sum-product networks via an SVD-based algorithm. In UAI, 2015.

Robert Gens and Pedro Domingos. Discriminative learning of sum-product networks. In NIPS, pp. 3248-3256, 2012.

Robert Peharz. Foundations of Sum-Product Networks for Probabilistic Modeling. PhD thesis, Medical University of Graz, 2015.

Hoifung Poon and Pedro Domingos. Sum-product networks: A new deep architecture. In UAI, pp. 2551-2558, 2011.

Mazen Melibari, Pascal Poupart, Prashant Doshi, and George Trimponias. Dynamic sum-product networks for tractable inference on sequence data. In JMLR Conference and Workshop Proceedings - International Conference on Probabilistic Graphical Models (PGM), 2016.

Abdullah Rashwan, Han Zhao, and Pascal Poupart. Online and distributed Bayesian moment matching for sum-product networks. In AISTATS, 2016.

Amirmohammad Rooshenas and Daniel Lowd. Learning sum-product networks with direct and indirect variable interactions. In ICML, pp. 710-718, 2014.

Dan Roth. On the hardness of approximate reasoning. Artificial Intelligence, 82(1):273-302, 1996.

Lucas Theis, Aaron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv:1511.01844, 2015.

Antonio Vergari, Nicola Di Mauro, and Floriana Esposito.
Simplifying, regularizing and strengthening sum-product network structure learning. In ECML-PKDD, pp. 343-358, 2015.

Han Zhao and Pascal Poupart. A unified approach for learning the parameters of sum-product networks. arXiv:1601.00318, 2016.

Han Zhao, Mazen Melibari, and Pascal Poupart. On the relationship between sum-product networks and Bayesian networks. In ICML, 2015.

Han Zhao, Tameem Adel, Geoff Gordon, and Brandon Amos. Collapsed variational inference for sum-product networks. In ICML, 2016.

Ruslan Salakhutdinov and Geoffrey E. Hinton. Deep Boltzmann machines. In AISTATS, pp. 448-455, 2009."}]
HyCRyS9gx [{"section_index": "0", "section_name": "FAST ADAPTATION IN GENERATIVE MODELS WITH GENERATIVE MATCHING NETWORKS", "section_text": "Sergey Bartunov & Dmitry P. Vetrov

National Research University Higher School of Economics (HSE), Moscow, Russia"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulties with generalization from a small number of training examples. Both problems may be addressed by conditional generative models that are trained to adapt the generative distribution to additional input data. So far this idea was explored only under certain limitations, such as restricting the input data to be a single object or multiple objects representing the same concept. In this work we develop a new class of deep generative model called generative matching networks, which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks and by ideas from meta-learning. By conditioning on the additional input dataset, generative matching networks may instantly learn new concepts that were not available during the training but conform to a similar generative process, without explicit limitations on the number of additional input objects or the number of concepts they represent. Our experiments on the Omniglot dataset demonstrate that generative matching networks can significantly improve predictive performance on the fly as more additional data is available to the model, and also adapt the latent space, which is beneficial in the context of feature extraction."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "Deep generative models are currently one of the most promising directions in generative modelling. In this class of models the generative process is defined by a composition of conditional distributions modelled using deep neural networks, which form a hierarchy of latent and observed variables. This approach allows to build models with complex, non-linear dependencies between variables and to efficiently learn the variability across training examples.

Such models are trained by stochastic gradient methods which can handle large datasets and a wide variety of model architectures but also present certain limitations. The training process usually consists of small, incremental updates of networks' parameters and requires many passes over training data. Notably, once a model is trained it cannot be adapted to newly available data without complete re-training, to avoid catastrophic interference (McCloskey & Cohen, 1989; Ratcliff, 1990). There is also a risk of overfitting for concepts that are not represented by enough training examples, which is caused by the high capacity of the models. Hence, most deep generative models are not well-suited for rapid learning in the one-shot scenario which is often encountered in real-world applications where data acquisition is expensive or fast adaptation to new data is required.

A potential solution to these problems is explicit learning of adaptation mechanisms complementing the shared generative process. In the probabilistic modelling framework, adaptation may be expressed as conditioning the model on additional input examples serving as induction bias.
Notable steps in this direction have been made by Rezende et al. (2016), whose model was able to condition on a single object to produce new examples of the concept it represents. Later, Edwards & Storkey (2016) proposed a model that maintained a global latent variable capturing statistics about multiple input objects, which was used to condition the generative distribution. It allowed to implement the fast adaptation mechanism, although the fast learning ability was only evaluated on inputs representing a single concept.

In this work we present Generative Matching Networks, a new family of conditional generative models capable of instant adaptation to new concepts that were not available at the training time but share the structure of the underlying generative process with the training examples. By conditioning on additional inputs, Generative Matching Networks improve their predictive performance, the quality of generated samples, and also adapt their latent space, which may be useful for unsupervised feature extraction. Importantly, no explicit limitations on the conditioning data are imposed, such as the number of objects or the number of different concepts, which expands the applicability of one-shot generative modelling and distinguishes our work from existing approaches. Our model is inspired by the attentional mechanism implemented in Matching Networks (Vinyals et al., 2016), previously proposed for discriminative tasks, and by the recent advances from meta-learning (Santoro et al., 2016). Our approach for adaptation is an extension of these ideas to generative modelling, and it may be re-used in a variety of different models, being not restricted to the particular architecture used in the paper. The source code for generative matching networks is available at http://github.com/sbos/gmn.

This paper is organized as follows. First, in section 2 we revisit the necessary background in the variational approach to training generative models and mention the related work in conditional generative models. Then, in section 3 we describe the proposed generative model, its recognition counterpart and the training protocol. Section 4 contains experimental evaluation of the proposed model as both a generative model and an unsupervised feature extractor in small-shot learning settings. We conclude with a discussion of the results in section 5.

We consider the problem of learning a probabilistic generative model which can be expressed as a probability distribution p(x|θ) over objects of interest x, parametrized by θ. The major class of generative models also introduces latent variables z that are used to explain or generate an object x, such that p(x|θ) = ∫ p(z|θ) p(x|z, θ) dz, and are assumed to be non-observable.

Currently, the common practice is to restrict the conditional distributions p(z|θ) and p(x|z, θ) to tractable distribution families and to use deep neural networks for regressing their parameters. The expressive power of deep non-linear generative models comes at a price, since the marginal distribution p(x|θ) can neither be computed analytically nor be directly optimized in a statistically efficient way.
Fortunately, intractable maximum likelihood training can be avoided in practice by resorting to adversarial training (Gutmann & Hyvarinen, 2012; Goodfellow et al., 2014) or to the variational inference framework (Kingma & Welling, 2013; Rezende et al., 2014), which we consider further.

Recent developments in variational inference alleviate the problem of maximizing the intractable marginal likelihood log p(x|θ) by approximating it with a lower bound (Jordan et al., 1999):

log p(x|θ) ≥ L(θ, φ) = E_q [log p(x, z|θ) − log q(z|x, φ)] = log p(x|θ) − KL(q(z|x, φ) || p(z|x, θ)).    (1)

Tightness of the bound is controlled by the recognition model q(z|x, φ), which aims to minimize the Kullback-Leibler divergence from the true posterior p(z|x, θ).

Similarly to the generative model, the recognition model may also be implemented with deep neural networks or another form of parameter regression, which is known as amortized inference (Gershman & Goodman, 2014). Amortized inference allows a single recognition model to be used for many training examples. Thus, it is convenient to train the generative model p(x|θ) by stochastic gradient optimization of the variational lower bounds (1) corresponding to independent observations X = {x_i}_{i=1}^N:

∑_{i=1}^N log p(x_i|θ) ≥ ∑_{i=1}^N E_q [log p(x_i, z_i|θ) − log q(z_i|x_i, φ)] → max_{θ,φ}.

The clear advantage of this approach is its scalability. Every stochastic update to the parameters, computed from only a small portion of training examples, has an immediate effect for the whole dataset. However, while a single parameter update may be relatively fast, a large number of them is required to significantly improve the generative or inferential performance of the model.

Hence, gradient training of generative models usually results in an extensive computational process which prevents rapid incremental learning. In the next section we discuss potential solutions to this problem that allow to implement fast learning ability in generative models.

In the probabilistic modelling framework, the natural way of incorporating knowledge about newly available data is conditioning. One may design a model that, being conditioned on the additional input data X = x_1, x_2, ..., x_T, represents a new generative distribution p(x|X, θ).

An implementation of this idea can be found in the model by Rezende et al. (2016). Besides many other attractive novelties such as using sophisticated attention and feedback components, the model was able to produce new examples of a concept that was missing at the training time but had similarities in the underlying generative process with the other training examples. The model supported an explicit conditioning on a single observation x' representing the new concept to construct a new generative distribution of the form p(x|x', θ).

The explicit conditioning, when adaptation is performed by the model itself and has to be learned, is not the only way to propagate knowledge about new data. Another solution, which is often encountered in Bayesian models, is to maintain a global latent variable α encoding information about the whole available dataset, such that the individual observations are conditionally independent given its value. The model then would have the following form:

p(X|θ) = ∫ p(α|θ) ∏_{t=1}^T p(x_t|α, θ) dα.    (2)

The principal existence of such a global variable may be justified by de Finetti's theorem (Diaconis & Freedman, 1980) under the exchangeability assumption. In the model (2), the conditional generative distribution p(x|X, θ) is then defined implicitly via the posterior over the global variable:

p(x|X, θ) = ∫ p(x|α, θ) p(α|X, θ) dα.    (3)

Once there is an efficient inference procedure for the global variable α, fast adaptation of the generative model can be implemented straightforwardly.

There are several relevant examples of generative models with global latent variables used for model adaptation and one-shot learning. Salakhutdinov et al. (2013) combined a deep Boltzmann machine (DBM) with a nested Dirichlet process (nDP) in a Hierarchical-Deep (HD) model. While being a
compelling demonstration of important ideas from Bayesian nonparametrics and deep learning, the HD model required an extensive Markov chain Monte Carlo inference procedure used both for training and adaptation. Thus, while the Bayesian learning approach could prevent overfitting, the fast learning ability still presents a challenge for sampling-based inference.

Later, Lake et al. (2015) proposed the Bayesian program learning (BPL) approach for building a generative model of handwritten characters. The model was defined as a probabilistic program containing a fine-grained specification of prior knowledge of the task, such as generation of strokes and their composition into characters mimicking human drawing strategies. The authors used an extensive posterior inference as the training procedure and the conditioning mechanism (3) for generating new examples. The model was shown to efficiently learn from a small number of training examples, but similarly to the HD model, the sophisticated and computationally expensive inference procedure makes fast adaptation in BPL generally hard to achieve.

The recently proposed neural statistician model (Edwards & Storkey, 2016) is an example of a deep generative model with a global latent variable (2). The model was trained by optimizing a variational lower bound following the approach described in section 2.1, but with an additional recognition model approximating the posterior distribution over the global latent variable. The authors designed the recognition model to be computationally efficient and to require only a single pass over the data, which consisted of extracting special features from the examples, applying a pooling operation to them (e.g. averaging) and passing the result to another network providing parameters of the variational approximation. This simple architecture allowed for fast learning and guaranteed invariance to both data permutations and the size of the conditioning dataset. However, experimentally the fast learning ability in the model was evaluated only in the setting where all of the training examples represented the same single concept.

We argue that in order to capture more information about the conditioning data, such as the number of different concepts, a more sophisticated aggregation procedure must be employed. Moreover, a fixed parametric description is too restrictive for an accurate representation of datasets of varying size. This motivates us to combine the best of two worlds: nonparametric representation of data and fast inference with neural recognition models. We proceed with a description of the proposed model.

Generative matching networks aim to model conditional generative distributions of the form p(x|X, θ). Similarly to other deep generative models, we introduce a local latent variable z. Thus, the full joint distribution of our model can be expressed as:

p(x, z|X, θ) = p(z|X, θ) p(x|z, X, θ).    (4)

In order to design a fast adaptation mechanism we have to make certain assumptions about relationships between training data and the new data used to condition the model.
Thus we assume the homogeneity of generative processes for training and conditioning data up to some parametrization. One may think of this parametrization as specifying the weights of a neural network defining a generative model. The generative process is assumed to have an approximately linear dependence on the parameters, such that interpolation between parameters corresponding to different examples of the same concept can serve as good parameters for generating other examples. A similar assumption is used e.g. in the neural statistician model (Edwards & Storkey, 2016).

However, even if a single concept can be well embedded into a fixed parameter space, this does not imply that a diverse set of concepts will fit into the same parametrization. Hence we express the dependency on the conditioning data in a different way. Instead of embedding the whole conditioning dataset, we use a special matching procedure that extracts relevant observations from X and interpolates between their descriptions, allowing to generate and recognize similar observations."}, {"section_index": "3", "section_name": "3.1 BASIC MODEL", "section_text": "In the basic model, the prior over latent variables p(z) is independent from the conditioning data X, e.g. a standard normal distribution. In order to generate a new object, a sample from the prior z and the conditioning objects X = x_1, x_2, ..., x_T are mapped into the matching space, where they are compared using a similarity function sim(·, ·) to form an attention kernel a(z, x). After that, the conditioning objects are interpolated in the prototype space, weighted according to the attention kernel. The resulting interpolation is then used to parametrize the generative process that corresponds to the sampled value of the latent variable.

Formally, the described matching procedure is given by the following equation:

r = ∑_{t=1}^T a(z, x_t) ψ_L(x_t),   a(z, x_t) = exp(sim(f_L(z), g_L(x_t))) / ∑_{t'=1}^T exp(sim(f_L(z), g_L(x_t'))).    (5)

After the vector r is computed, it is used as an input to a decoder, e.g. a deconvolutional network.

Functions f_L and g_L are used to map latent variables and conditioning objects, correspondingly, into the matching space. Since the matching space is supposed to be a feature space that is good for discriminating between objects, g_L can be implemented as a feature extractor suitable for the domain of observations, a convolutional network in our case. We found it sufficient to implement the function f_L as a simple affine transformation followed by a non-linearity, because the latent variable itself is assumed to be an abstract object description. We also used a simple dot product as a similarity function between these vectors.

The function ψ_L can also be considered as a feature extractor, although since the features useful to specify the generative process are not necessarily good for discrimination, it makes sense to represent ψ_L and g_L differently. However, in our implementation ψ_L was implemented as a convolutional network sharing most of the parameters with g_L to keep the number of trainable parameters small.

(Figure 1 diagram: the generative model and the recognition model, each with a pseudo-input.)

Figure 1: Structure of a basic generative matching network, see equation (5) in section 3.1 for the description of functions f, g and ψ. Subscripts L and R denote the conditional likelihood and recognition model correspondingly.

We have described the basic matching procedure on the example of the conditional likelihood p(x|z, X, θ).
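To make the computation concrete, here is a minimal NumPy sketch of one matching step (5). The function name and the assumption that the embeddings f_L(z), g_L(x_t) and prototypes ψ_L(x_t) arrive as precomputed vectors are ours for illustration; this is not the authors' released code.

import numpy as np

def match_step(f_z, g_X, psi_X):
    # f_z: (d,) query mapped into the matching space
    # g_X: (T, d) conditioning objects mapped into the matching space
    # psi_X: (T, p) prototype features of the conditioning objects
    sims = g_X @ f_z                         # dot-product similarity sim(f(z), g(x_t))
    sims = sims - sims.max()                 # numerical stabilization of the softmax
    a = np.exp(sims) / np.exp(sims).sum()    # attention kernel a(z, x_t)
    r = a @ psi_X                            # interpolated prototype r
    return r, a

The resulting vector r is then fed to the decoder exactly as described above.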
Although the procedure (5) is invoked in several parts of the model, each part may operate with its own implementation of the functions; hence the subscripts: L is used for the functions f, g and ψ of the likelihood part, and below we use R to denote the recognition part.

The recognition model q(z|X, x) uses the matching procedure (5), with the difference that the conditioning objects are matched not with a value of the latent variable, but rather with an observation x. The feature extractor f_R in this case can share most of the parameters with g_R, and in our implementation these functions were identical for matching in the recognition model, i.e. g_R = f_R. Moreover, since g_L is also used to project observations into the matching space, we further re-use already defined functionality by setting g_R = g_L. We also shared the prototype functions for all parts of our model, although this is not technically required.

After the matching, the interpolated prototype vector r is used to compute the parameters of the approximate posterior, which in our case was a normal distribution with diagonal covariance matrix, i.e. q(z|X, x, φ) = N(z | μ(r), Σ(r)).

A major difference between the generative matching networks and the originally proposed discriminative matching networks (Vinyals et al., 2016) is that, since no label information is available to the model, the interpolation in equation (5) is performed not in the label space but rather in the prototype space, which itself is defined by the model and is learnt during the training.

One can note that the described conditional model is not applicable in a situation where no conditioning objects are available. A possible solution to this problem involves the implicit addition of a pseudo-input to the set of conditioning objects X. A pseudo-input is not an actual observation, but rather just the corresponding outputs of functions f, g and ψ, which are assumed to be additional trainable parameters.

A stochastic computational graph describing the basic model with pseudo-input can be found in figure 1. Further, by default we assume the presence of a single pseudo-input in the model, and denote models without pseudo-input as conditional."}, {"section_index": "4", "section_name": "3.2 EXTENSIONS", "section_text": "Although the basic model is capable of instant adaptation to the conditioning dataset X, it admits a number of extensions that can seriously improve its performance.

The disadvantage of the basic matching procedure (5) is that the conditioning observations X are embedded into the matching space independently from each other. Similarly to discriminative matching networks, we address this problem by computing full contextual embeddings (FCE) (Vinyals et al., 2015). In order to obtain a joint embedding of the conditioning data, we allow K attentional passes over X of the form (5), guided by a recurrent controller R which accumulates global knowledge about the conditioning data in its hidden state h. The hidden state is then passed to the feature extractors f and g to obtain context-dependent embeddings.
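A correspondingly minimal sketch of this K-pass procedure, reusing match_step from the previous sketch; the callable controller R (e.g. a GRU/LSTM cell) and all names here are our assumptions, not the paper's released implementation.

def full_context_matching(z, h0, g_X, psi_X, f, R, K):
    # z: query (a latent sample or an observation embedding; None for the prior)
    # f(z, h): context-dependent embedding into the matching space
    # R(h, r): recurrent controller update accumulating global knowledge
    h, r = h0, None
    for _ in range(K):                       # K >= 1 attentional passes over X
        r, _ = match_step(f(z, h), g_X, psi_X)
        h = R(h, r)
    return r, h                              # prototype r_K and final state h_{K+1}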
Formally, the k-th attentional pass is computed as:

r_k = ∑_{t=1}^T a_k(z, x_t) ψ(x_t),   a_k(z, x_t) = exp(sim(f(z, h_k), g(x_t, h_k))) / ∑_{t'=1}^T exp(sim(f(z, h_k), g(x_t', h_k))),   h_{k+1} = R(h_k, r_k).    (6)

The output of the full matching procedure is thus the interpolated prototype vector from the last iteration, r_K, and the last hidden state of the controller, h_{K+1}.

Besides context-dependent embedding of the conditioning data, the full matching procedure allows to implement a data-dependent prior over latent variables p(z|X). In this case, no query point such as a latent variable z or an observation x is used to match with the conditioning data, and only the hidden state of the controller h is passed to the functions f and g. The output of the procedure is then used to compute the parameters of the prior, i.e. means and standard deviations in our case.

As we discuss in the experiments section, we found these extensions so important that further we consider only the model with full matching described by equation (6) and the data-dependent prior. Please refer to the appendix and the source code for architectural details of our implementation.

Training of our model consists of maximizing the marginal likelihood of a dataset X, which can be expressed as:

p(X|θ) = ∏_{t=1}^T p(x_t|X_{<t}, θ),   X_{<t} = {x_s}_{s=1}^{t−1}.    (7)

Ideally we would like to use the whole available training data as X, but due to computational limitations we instead use a training strategy rooted in curriculum learning (Bengio et al., 2009) and meta-learning (Thrun, 1998; Vilalta & Drissi, 2002; Hochreiter et al., 2001), which recently was successfully applied to one-shot discriminative learning (Santoro et al., 2016).
In particular, we define a task-generating distribution p_d(X) which in our case samples datasets X of size T from training examples. Then we train our model to explain well all of the sampled datasets simultaneously:

E_{p_d(X)} [p(X|θ)] → max_θ.    (8)

Obviously, the structure of the task-generating distribution has a large impact on training, and using an arbitrary distribution will unlikely lead to good results. Hence, we assume that at the training time we have access to label information and can distinguish different concepts or classes. We thus constrain p_d(X) to generate datasets consisting of examples that represent up to C randomly selected classes, so that even on short datasets the model has a clear incentive to re-use the conditioning data. This may be considered as a form of weak supervision, but we want to emphasize that one does not need the label information at test time, unless the model is deliberately used for classification, which is also possible.

Since the marginal likelihood (7) as well as the conditional marginal likelihoods are intractable, we instead use the variational lower bound (see section 2.1) as a proxy to p(X|θ) in the objective (8):

L(X, θ, φ) = ∑_{t=1}^T E_{q(z_t|x_t, X_{<t}, φ)} [log p(x_t, z_t|X_{<t}, θ) − log q(z_t|x_t, X_{<t}, φ)].    (9)"}, {"section_index": "5", "section_name": "4 EXPERIMENTS", "section_text": "For our experiments we use the Omniglot dataset (Lake et al., 2015), which consists of 1623 classes of handwritten characters from 50 different alphabets. The first 30 alphabets are devoted to training and the remaining 20 alphabets are left for testing. Importantly, only 20 examples of each class are available, which makes this dataset specifically useful for small-shot learning problems. Unfortunately, the literature is inconsistent in the usage of the dataset, and multiple versions of Omniglot were used for evaluation which differ by train/test split, resolution, binarization and augmentation, see e.g. (Burda et al., 2015; Rezende et al., 2016; Santoro et al., 2016).

We use the canonical split provided by Lake et al. (2015). In order to speed up training we downscaled images to 28 × 28 resolution, and since the result was fully binary we did not apply any further pre-processing. We also did not augment our data, in contrast to (Santoro et al., 2016; Edwards & Storkey, 2016), to make future comparisons with our results easier.

(Figure 2 legend: combinations of steps = 1, 2, 4 for the shared controller with prior steps = 1, 2, 4, plus a standard normal prior baseline; both panels are plotted against the number of shots, 0-9.)

Figure 2: Lower bound estimates (left) and entropy of prior (right) for various numbers of attention steps and numbers of conditioning examples. Numbers are reported for the training part of Omniglot.

Unless otherwise stated, we train models on datasets of length T = 20 and of up to C = 2 different classes, as we did not observe any improvement from training with larger values of C."}, {"section_index": "6", "section_name": "4.1 NUMBER OF ATTENTION STEPS", "section_text": "Since the full context matching procedure (6) described in section 3.2 consists of multiple attention steps, it is interesting to see the effect of these numbers on the model's performance. We trained several models with a smaller architecture and T = 10, varying the number of attention steps allowed for the likelihood and recognition shared controller and for the prior controller respectively. The models were compared using exponential moving averages of lower bounds corresponding to different numbers of conditioning examples X_{<t} obtained during the training. Results of the comparison can be found in figure 2.

Interestingly, larger numbers of steps lead to better results; however, the lower bounds are almost not improving after the shared controller is allowed 4 steps. This behaviour was not observed with discriminative matching networks, perhaps confirming the difficulty of unsupervised learning. Another important result is that the standard Gaussian prior makes adaptation significantly harder for the model, yet still possible, which justifies the importance of adaptation not just for the likelihood model but also for the prior.

One may also see that all models preferred to set higher variances for the prior, resulting in higher entropy compared to the standard normal prior. Clearly, as more examples are available, generative
matching networks become more certain about the data and output less dispersed Gaussians.

Based on this comparison we decided to proceed with models that have 4 steps for the shared controller and a single step for the prior controller, which is a reasonable compromise between computational cost and performance.

In this section we compare generative matching networks with a set of baselines by expected conditional likelihoods E_{p_d(X)} p(x_t|X_{<t}). The conditional likelihoods were estimated using importance sampling with 1000 samples from the recognition model used as a proposal.

We found it hard to properly compute conditional likelihoods for the neural statistician model (3) and hence had to exclude this model from the comparison; please see the appendix for the details. Instead, we consider a simple generative matching network denoted as avg, in which the matching procedure is replaced with prototype averaging, which makes the adaptation mechanism similar to the one used in the neural statistician. We also omitted sequential generative models (Rezende et al., 2016) from the comparison, as they were reported to overfit on the canonical train/test split of Omniglot. Another baseline we use is a standard variational autoencoder which has the same architecture for the generative and recognition model as the full generative matching networks.

Table 1 contains results of the evaluation on the test alphabets from Omniglot. Ctrain and Ctest denote the maximum number of classes in the task-generating distributions p_d(X) used for training and evaluation respectively.

As one could expect, larger values of Ctest make adaptation harder, since on average fewer examples of the same class are available to the model. Still, generative matching networks are capable of working in the low-data regime even when the testing setting is harder than the one used for training, i.e. Ctest > Ctrain. Unsurprisingly, adaptation by averaging over prototype features performed reasonably well for simple datasets constructed of a single class, although significantly worse than the proposed matching procedure. On more difficult datasets with mixed examples of two different classes (Ctest = 2), averaging was ineffective for expressing dependency on the conditioning data, which justifies our argument on the necessity of nonparametric representations.

As we mention in section 3.1, it is possible to add a pseudo-input to the model to make it applicable for cases when no conditioning data is available. In this comparison, by default we assume that a single pseudo-input was added to the model; otherwise we denote a model with no pseudo-input as conditional. When training and evaluating conditional models we ensure that the first C objects in a dataset belong to different classes, so that they in principle contain enough information to explain the rest of the dataset.

In order to visually assess the fast adaptation ability of generative matching networks we also provide conditionally generated samples in figure 3. Interestingly, the conditional version of our model, which does not use a pseudo-input both at training and testing time, generated samples slightly more similar to the conditioning data while sacrificing the predictive performance. Therefore, presence or absence of the pseudo-input should depend on the target application of the model, i.e. density estimation or producing new examples."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "In this paper we presented a new class of conditional deep generative models called generative matching networks. These models are capable of fast adaptation to the conditioning dataset by adjusting both the latent space and the predictive density while making very few assumptions on the data. The nonparametric matching enabling these features can be seen as a generalization of the original matching procedure, since it allows a model to define the label space itself, extending the applicability of matching networks to unsupervised and perhaps semi-supervised settings. We believe that these ideas can evolve further and help to implement more data-efficient models in other domains such as
reinforcement learning where data acquisition is especially hard."}, {"section_index": "8", "section_name": "ACKNOWLEDGMENTS", "section_text": "We would like to thank Michael Figurnov and Timothy Lillicrap for useful discussions. Dmitry P. Vetrov is supported by RFBR project No. 15-31-20596 (mol-a-ved) and by the Microsoft: MSU joint research center (RPD 1053945).

Table 1: Conditional negative log-likelihoods for the test part of Omniglot

                                          Number of conditioning examples
Model                              Ctest  0     1     2     3     4     5     10    19
GMN, Ctrain = 2                    1      89.7  83.3  78.9  75.7  72.9  70.1  59.9  45.8
GMN, Ctrain = 2                    2      89.4  86.4  84.9  82.4  81.0  78.8  71.4  61.2
GMN, Ctrain = 2                    3      89.6  88.1  86.0  85.0  84.1  82.0  76.3  69.4
GMN, Ctrain = 2                    4      89.3  88.3  87.3  86.7  85.4  84.0  80.2  73.7
GMN, Ctrain = 2, conditional       1      -     93.5  82.2  78.6  76.8  75.0  69.7  64.3
GMN, Ctrain = 2, conditional       2      -     -     86.1  83.7  82.8  81.0  76.5  71.4
GMN, Ctrain = 2, conditional       3      -     -     -     86.1  84.7  83.8  79.7  75.3
GMN, Ctrain = 2, conditional       4      -     -     -     -     86.8  85.7  82.5  78.0
VAE                                -      89.1
GMN, Ctrain = 1, avg               1      92.4  84.5  82.3  81.4  81.1  80.4  79.8  79.7
GMN, Ctrain = 2, avg               2      88.2  86.6  86.4  85.7  85.3  84.5  83.7  83.4
GMN, Ctrain = 1, avg, conditional  1      -     88.0  84.1  82.9  82.4  81.7  80.9  80.7
GMN, Ctrain = 2, avg, conditional  2      -     -     85.7  85.0  85.3  84.6  84.5  83.7

(Figure 3 panels, left to right: (a) full matching, (b) full matching, conditional, (c) average matching, conditional.)

Figure 3: Conditionally generated samples. First column contains conditioning data in the order it is revealed to the model. Row number t (counting from zero) consists of samples conditioned on the first t input examples."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Sepp Hochreiter, A. Steven Younger, and Peter R. Conwell. Learning to learn using gradient descent. In International Conference on Artificial Neural Networks, pp. 87-94. Springer, 2001.

Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.

Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.

Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of Learning and Motivation, 24:109-165, 1989.

Persi Diaconis and David Freedman. Finite exchangeable sequences. The Annals of Probability, pp. 745-764, 1980.

Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.

Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. One-shot learning with memory-augmented neural networks. arXiv preprint arXiv:1605.06065, 2016.

Sebastian Thrun. Lifelong learning algorithms. In Learning to Learn, pp. 181-209. Springer, 1998.

Ricardo Vilalta and Youssef Drissi. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18(2):77-95, 2002."}, {"section_index": "10", "section_name": "CONDITIONAL GENERATOR", "section_text": "The conditional generator network producing parameters for p(x|z, X, θ) takes the concatenation of z and the output of the matching operation [r, h] as input, which is transformed to a 3 × 3 × 32 tensor and then passed through 3 residual blocks of transposed convolutions. Each block has the following form:

h = conv1(x),   y = f(conv2(h) + h) + pool(scale(x)).

The block is parametrized by the size of the filters used in convolutions conv1 and conv2, the shared number of filters F and the stride S.

scale is another convolution with 1 × 1 filters and the shared stride S. In all other convolutions the number of filters is the same and equals F. conv1 and pool also have stride S.
conv2 preserves the size of the input by padding and has stride 1.

Blocks used in our paper have the following parameters (W1 × H1, W2 × H2, F, S):

1. (2 × 2, 2 × 2, 32, 2)
2. (3 × 3, 3 × 3, 16, 2)
3. (4 × 4, 3 × 3, 16, 2)

Then log-probabilities for binary pixels were obtained by summing the result of these convolutions along the channel dimension.

The feature encoder has an architecture which is symmetric to the generator network. The only difference is that the scale operation is replaced by bilinear upscaling.

The residual blocks for the feature encoder have the following parameters:

1. (4 × 4, 3 × 3, 16, 2)
2. (3 × 3, 3 × 3, 16, 2)
3. (2 × 2, 2 × 2, 32, 2)

The result is a tensor of 3 × 3 × 32 = 288 dimensions.

Each function f or g used in our model is simply an affine transformation of the feature encoder's output (interpreted as a vector) to a 200-dimensional space, followed by a parametric rectified non-linearity."}, {"section_index": "11", "section_name": "APPENDIX B. TRANSFER TO MNIST", "section_text": "In this experiment we test the ability of generative matching networks to adapt not just to new concepts, but also to a new domain. Since we trained our models on 28 × 28 resolution for Omniglot, it should be possible to apply them on the MNIST dataset as well. We used the test part of MNIST, to which we applied a single random binarization.

Table 2 contains the estimated predictive likelihood for different models. Qualitative results from the evaluation on Omniglot remain the same. Although transfer to a new domain caused a significant drop in performance for all of the models, one may see that generative matching networks still demonstrate the ability to adapt to the conditioning data. At the same time, average matching does not seem to efficiently re-use the conditioned data in such a transfer task, since the relative improvements in expected conditional log-likelihood are rather small. Apparently, the model trained on one-class datasets also learned highly dataset-dependent features, as it actually performed even worse than the model with Ctrain = 2.

We also provide conditional samples in figure 4. Both the visual quality of samples and the test log-likelihoods are significantly worse compared to Omniglot, which can be caused by a visual difference of the MNIST digits from Omniglot characters. The images are bolder and less regular due to binarization. Edwards & Storkey (2016) suggest that the quality of transfer may be improved by augmentation of the training data; however, for the sake of experimental simplicity and
reproducibility we resisted from any augmentation.

Table 2: Conditional negative log-likelihoods for the test part of MNIST. Models were trained on the train part of Omniglot.

                                          Number of conditioning examples
Model                              Ctest  0      1      2      3      4      5      10     19
GMN, Ctrain = 2                    1      126.7  121.1  118.4  117.6  117.1  117.1  117.1  118.5
GMN, Ctrain = 2                    2      126.2  123.1  121.3  120.1  119.4  118.9  118.3  119.6
GMN, Ctrain = 2, conditional       1      -      135.1  120.9  117.5  115.7  114.4  111.7  109.8
GMN, Ctrain = 2, conditional       2      -      -      123.1  121.9  119.4  118.8  115.2  113.2
GMN, Ctrain = 1, avg               1      131.5  126.5  123.3  121.9  121.0  120.2  118.6  117.5
GMN, Ctrain = 2, avg               2      126.2  122.8  121.0  119.9  118.9  118.7  117.8  116.8
GMN, Ctrain = 1, avg, conditional  1      -      132.1  126.9  125.0  124.8  123.9  121.7  120.9
GMN, Ctrain = 2, avg, conditional  2      -      -      118.4  117.9  117.4  117.1  116.6  115.8

(Figure 4 panels, left to right: (a) full matching, (b) full matching, conditional, (c) average matching, conditional.)

Figure 4: Conditionally generated samples on MNIST. Models were trained on the train part of Omniglot. Format of the figure is similar to fig. 3.

Generative matching networks are useful not only as adaptive density estimators. For example, one may use a pre-trained model for classification in several ways. Given a small number of labelled examples X_c = {x_{c,1}, x_{c,2}, ..., x_{c,N}} for each class c ∈ {1, 2, ..., C}, it is possible to use the probability p(x|X_c) as a relative score to assign class c to a new object x.

Alternatively, one may use the recognition model q(z|X_1, ..., X_C, x) to extract features describing the new object x and then use a classifier of choice, e.g. the nearest neighbour classifier. We implemented this method using cosine similarity on the mean parameters of the approximate Normal posteriors.

The results under different numbers of available training examples are provided in Table 3. Surprisingly, the simpler model with average matching performed slightly better than the full matching model. Perhaps generative matching networks are very smooth density models, and even being conditioned on a number of same-class examples they still assign enough probability mass to discrepant observations. The same conclusion can be made by assessing the generated samples in figure 3, which may guide further research on the topic.

Table 3: Small-shot classification accuracy (%) on the test part of Omniglot

                                                   5-way           20-way
Model                              Method          1-shot  5-shot  1-shot  5-shot
GMN, Ctrain = 1, conditional       likelihood      82.7    97.4    64.3    90.8
GMN, Ctrain = 1, avg, conditional  likelihood      90.8    96.7    77.0    91.0
GMN, Ctrain = 1, conditional       mean cosine     62.7    80.8    45.1    67.2
GMN, Ctrain = 1, avg, conditional  mean cosine     72.0    86.0    50.1    72.6
1-NN, raw pixels                   cosine          34.8    50.5    15.6    28.2

The neural statistician model falls into the category of models with global latent variables which we describe in section 2.2. The conditional likelihood for these models has the form:

p(x|X, θ) = ∫ p(α|X, θ) p(x|α, θ) dα.

This quantity is hard to compute since it consists of an expectation with respect to the true posterior over the global variable α. Since this distribution is intractable, simple importance sampling cannot be used to estimate the likelihood. Thus, we tried the following strategies.

First, we used self-normalized importance sampling to directly estimate p(x|X, θ):

p(x|X, θ) ≈ ∑_{s=1}^S w̄_s p(x, z^{(s)}|α^{(s)}, θ),   w_s = p(α^{(s)}, X, Z^{(s)}|θ) / [q(α^{(s)}|X, φ) q(Z^{(s)}, z^{(s)}|X, x, α^{(s)}, φ)],   w̄_s = w_s / ∑_{s'=1}^S w_{s'},

but observed somewhat contradictory results, such as a non-monotonic dependency of the estimate on the size of the conditioning dataset.
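The following generic NumPy sketch shows the shape of such a self-normalized estimate and of the effective sample size diagnostic discussed next; the function names are ours, and log_w / log_f stand for the log-weights log w_s and the log-integrand values.

import numpy as np

def logsumexp(a):
    m = a.max()
    return m + np.log(np.exp(a - m).sum())

def snis_estimate(log_w, log_f):
    # log of the self-normalized estimate sum_s wbar_s * f_s, computed stably
    log_est = logsumexp(log_w + log_f) - logsumexp(log_w)
    # effective sample size ESS = 1 / sum_s wbar_s^2;
    # small values indicate a poorly matched proposal
    w_bar = np.exp(log_w - logsumexp(log_w))
    ess = 1.0 / (w_bar ** 2).sum()
    return log_est, ess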
The diagnostic of the effective sample size suggested that the recognition model is not well suited as a proposal for the task.

Another strategy was to sequentially estimate (9) and then use the equation

p(x_t|X_{<t}, θ) = p(x_t, X_{<t}|θ) / p(X_{<t}|θ),

which appeared to be as unreliable as the previous strategy."}]
rJo9n9Feg [{"section_index": "0", "section_name": "CHESS GAME CONCEPTS EMERGE UNDER WEAK SUPERVISION: A CASE STUDY OF TIC-TAC-TOE", "section_text": "Hao Zhao* & Ming Lu

Department of Electronic Engineering, Tsinghua University

{zhao-h13, lu-m13}@mails.tsinghua.edu.cn

Department of Electronic Engineering, Tsinghua University, Beijing, China

chinazhangli@mail.tsinghua.edu.cn

Anbang Yao & Yurong Chen

{anbang.yao, yurong.chen}@intel.com

*This work was done when Hao Zhao was an intern at Intel Labs China, supervised by Anbang Yao who is responsible for correspondence."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "This paper explores the possibility of learning chess game concepts under weak supervision with convolutional neural networks, which is a topic that has not been visited to the best of our knowledge. We put this task in three different backgrounds: (1) deep reinforcement learning has shown an amazing capability to learn a mapping from visual inputs to most rewarding actions, without knowing the concepts of a video game. But how could we confirm that the network understands these concepts or it just does not? (2) cross-modal supervision for visual representation learning has drawn much attention recently. Is this methodology still applicable when it comes to the domain of game concepts and actions? (3) class activation mapping is widely recognized as a visualization technique to help us understand what a network has learnt. Is it possible for it to activate at non-salient regions? With the simplest chess game tic-tac-toe, we report interesting results as answers to those three questions mentioned above. All codes, pre-processed datasets and pre-trained models will be released."}, {"section_index": "2", "section_name": "1.1 APPLICATION BACKGROUND", "section_text": "Deep reinforcement learning (DRL) has drawn quite much attention since the publication of the influential work of Mnih et al. (2015). A convolutional neural network (CNN) is used to bridge the gap between video game screen frames and the most rewarding actions. An amazing feature of this kind of systems is that they do not need to know the concepts of these games (e.g. DRL learns to play Breakout without knowing there is a paddle or a ball in Fig. 1a). However, how could we confirm that this network really understands these concepts, or it just learns a mapping from patterns in the visual inputs to the best actions? This is the first question we are trying to answer here.

Mnih et al. (2015) provides some unsupervised analysis results for visualization, showing that perceptually dissimilar frames may produce close rewards, yet this does not answer the question. We choose another visualization technique called class activation mapping as described in Zhou et al. (2016), which can reveal where the CNN's attention is. However, directly applying it in tasks like Breakout still cannot answer the question. Imagine one modifies the network described in Mnih et al. (2015) into another version as Zhou et al. (2016) does. The CNN's attention may be fixed on the ball, but it is still not enough to support that the network understands the concept of a ball.
(Figure 1 panels (a)-(d): what it looks like / what it sounds like; what it looks like in RGB / in depth; what to do / what will happen. (a) DRL learns to play Breakout without knowing the concepts of a paddle or a ball. (b) Is the methodology of cross-modal supervision applicable for higher-level semantics? (c) Could the technique of class activation mapping activate at non-salient regions? (d) With the simplest chess game tic-tac-toe, we provide interesting results as answers.)

Figure 1: We raise three questions from application, methodology and technique perspectives respectively and provide our answers with a case study of the simplest chess game tic-tac-toe.

1.2 METHODOLOGY BACKGROUND

There have been some works about representation learning with cross-modal supervision recently. Owens et al. (2016) clusters sound statistics into several categories, and uses them as labels to learn visual representation from images corresponding to these sounds. It quantitatively shows that visual representation learnt in this way is capable of handling challenging computer vision tasks, and qualitatively shows that visual and sound representations are consistent (e.g. babies' faces correspond to baby cry sound samples). Castrejon et al. (2016) goes even further by learning representations across five modalities: RGB images, clip art pictures, sketches, texts and spatial texts. Gupta et al. (2016) learns depth image representation with mid-level features extracted from RGB images as supervision, and reports improved RGB-D object detection performance.

What is the common point among these works? They generate weak supervision from one modality and use it to learn representation from another (e.g. to learn what a train looks like from what a train sounds like, or to learn what a chair looks like in depth images from what a chair looks like in RGB images). During the training phase, no concepts about a train or a chair are explicitly modeled. Although there are many other modalities not visited by this methodology, we think the basic idea behind these works is the same: an abstract concept like a train can be observed in different modalities, and different representations can be connected.

Here comes the question: is this methodology still applicable when it goes beyond the problem of learning representations from different observations of a same concept? Albanie & Vedaldi (2016) is an example, which tries to relate facial expressions with what happened in a TV show (e.g. if a character earns a lot of money, she will be very happy). Although in Albanie & Vedaldi (2016) what happened is explicitly defined, it still can be regarded as a weak supervision for what this expression is.

Although with the same methodology, the problem studied in this paper addresses even higher semantics: to learn what to do under the weak supervision of what will happen (Fig. 1b). This is substantially different from the cross-modal supervision works mentioned above, because there is no longer a certain abstract concept of object or attribute observed in different modalities. Instead, figuring out the relationship between what to do and what will happen needs a higher level of intelligence.

We propose to use a simple chess game called tic-tac-toe for case study. In order to answer the question, we propose a protocol as this: to place a piece where the CNN's attention is, and examine whether it is the right move. Of course, the training has to be done under weak supervision, or say, without telling the network what exactly a right move is. We think if this experiment succeeds we can claim that the network figures out the concepts of: (1) a chess board grid; (2) the winning rule; (3) two sides.
Detailed analysis about these three concepts is provided later.

1.3 TECHNIQUE BACKGROUND

The core technique used in this paper is class activation mapping (CAM) as described in Zhou et al. (2016). So, leaving out all the backgrounds about playing a chess game or cross-modal supervision, what do our experiments say more than its inventors'? We think we show that CAM can also activate at non-salient regions. CAM helps us to understand what contributes the most to a classification result. As Fig. 1c shows, the heatmap reveals that the face contributes the most to the result that the network claims it as a person.

Since we render chessboards as visual inputs without adding noise, those empty spaces are completely empty, meaning that: (1) if we take out the activated patch in Fig. 1d, all pixels in this patch have exactly the same value; (2) if we evaluate this patch with a quantitative information metric like entropy, there is no information here. Thus the only reason why these regions are activated is that the network collects enough information from these regions' receptive fields. We argue that this experiment (CAM can activate at non-salient regions) testifies (again) CNN's ability to hierarchically collect information from visual inputs.

1.4 WHAT THIS PAPER IS ABOUT

After introducing those three backgrounds, we describe our work briefly as: to classify rendered tic-tac-toe chessboards with weak labels and to visualize that the CNN's attention automatically reveals where the next piece should be placed. Learnt representation shows that: (1) the network knows some concepts of the game that it is not told of; (2) this level of supervision for representation learning is possible; (3) the technique of class activation mapping can activate at non-salient regions.

2.1 CONCEPT LEARNING

Concept learning has different meanings in different contexts, and how to confirm a concept is learnt remains an open question. In Jia et al. (2013), a concept is learnt if a generative model is learnt from a small number of positive samples. In Lake et al. (2015), a concept is learnt if a model learnt from only one instance can generalize to various tasks. Higgins et al. (2016) claims a concept is learnt when a model can predict unseen objects' sizes and positions. To summarize, they evaluate whether a concept is learnt through a model's generalization ability. In even earlier works like Zhu et al. (2010); Yang et al. (2010), concept learning means an object/attribute classification task dealing with appearance variations, in which a concept is actually already pre-defined.

Unlike these works, we investigate the concepts of game rules instead of object/attribute. Unlike Jia et al. (2013); Lake et al. (2015); Higgins et al. (2016), we claim a concept is learnt through a novel testing protocol instead of generalization ability. Why could generalization ability show that a concept is learnt? We think the reason is that a model understands a concept if it can use it in more cases. To this end, we argue that our protocol could also show a concept is learnt, because the learnt representations in our experiments can be used to decide what to do though no rule about what needs to be done is provided.

The literature of cross-modal supervision and the differences between this paper and existing ones are already covered in the last section.
Here we re-claim it briefly: Owens et al. (2016), Castrejon et al. (2016) and Gupta et al. (2016) learn representations across modalities because these are actually different observations of a same (object or attribute) concept. Whether this methodology is applicable for higher-level concepts like game rules remains an open question, and we provide positive answers to this question.
Before the technique of class activation mapping was introduced by Zhou et al. (2016), pioneering works like Simonyan et al. (2014) and Zhou et al. (2015) had already shown CNN's ability to localize objects with image-level labels. Although obtained with different techniques, the activation visualization results of Simonyan et al. (2014) and Zhou et al. (2015) also focus on salient regions. Unlike these works, we show that class activation mapping can activate at non-salient regions, or, more specifically, completely texture-free regions. Since the activated patch itself provides no information, all discriminative information comes from its context. This is another strong piece of evidence that a CNN, as a hierarchical visual model, is capable of collecting information from receptive fields.
A tic-tac-toe chessboard is a 3 x 3 grid, and there are two players (black and white in our case). Due to duality, we generate all training samples assuming the black side takes the first move. The state space of tic-tac-toe is small, consisting of $3^9 = 19683$ combinations in total. Among them, many combinations are illegal, such as the one in which all 9 pieces are black. We exhaustively search over the space according to a recursive simulation algorithm, in which: (1) the chessboard state is denoted by an integer smaller than 19683; (2) every state corresponds to a 9-d vector, in which each element can take a value from the set {0-illegal, 1-black win, 2-white win, 4-tie, 5-uncertain}. We call this 9-d vector a state transfer vector, denoting what will happen if the next legal piece placement happens at the according location; (3) the generated transfer vectors can predict the existence of a critical move that will finish the game in advance. A minimal sketch of this enumeration is shown below.
After pruning out illegal states, we collect 4486 possible states in total. Among these samples, we further take out the 1029 states in which a certain side is going to win in the next move. We then transform these chessboard states into visual representations (gray-scale images at resolution (180, 180)). Each of these 1029 samples is assigned a label according to the state transfer vectors. There are in total 18 different labels, illustrating 2 (sides) x 9 (locations). As demonstrated by Fig. 2, we randomly pick a sample for each label. As mentioned before, the black side takes the first move; thus if the numbers of black and white pieces are equal, the next move will be the black side's, and if there is one more black piece, the next move will be the white side's.
Figure 2: 18 different types of chessboard states and corresponding labels
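The enumeration and transfer-vector computation just described can be sketched as follows. This is a minimal Python illustration written for clarity, not the authors' released simulation code; the base-3 encoding of cells and the parity-only legality filter are simplifying assumptions, and a stricter legality check (no piece placed after a finished game) is needed to reproduce the 4486 states reported above.

```python
# A state is an integer below 3**9; each base-3 digit encodes one cell:
# 0 = empty, 1 = black piece, 2 = white piece.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def decode(state):
    """Integer below 3**9 -> list of 9 cells in {0, 1, 2}."""
    cells = []
    for _ in range(9):
        cells.append(state % 3)
        state //= 3
    return cells

def winner(cells):
    """Return 1 (black) or 2 (white) if a line is completed, else 0."""
    for a, b, c in WIN_LINES:
        if cells[a] != 0 and cells[a] == cells[b] == cells[c]:
            return cells[a]
    return 0

def transfer_vector(cells):
    """9-d vector over {0-illegal, 1-black win, 2-white win, 4-tie,
    5-uncertain}: what happens if the next piece is placed at each cell."""
    side = 1 if cells.count(1) == cells.count(2) else 2   # black moves first
    vec = []
    for loc in range(9):
        if winner(cells) != 0 or cells[loc] != 0:
            vec.append(0)                    # no legal placement here
            continue
        cells[loc] = side                    # simulate the move
        if winner(cells) == side:
            vec.append(side)                 # 1 or 2: that side wins
        elif 0 not in cells:
            vec.append(4)                    # board full: tie
        else:
            vec.append(5)                    # game still open
        cells[loc] = 0                       # undo the move
    return vec

# Enumerate all integers and keep states with a valid move parity; a full
# legality check would prune further, down to the 4486 states noted above.
states = []
for s in range(3 ** 9):
    cells = decode(s)
    if cells.count(1) - cells.count(2) in (0, 1):
        states.append((s, transfer_vector(cells)))
```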
We will release this simulation code.
Figure 3: Class activation mapping results on our dataset
Although the concepts of two sides and nine locations are coded into the labels, this kind of supervision is still weak supervision, because what we are showing to the algorithm is just 18 abstract categories, as Fig. 2 shows. Could an algorithm figure out what it needs to do by observing these visual inputs? We think even for a human baby it would be difficult, because no concepts like "this is a game" or "you need to find out how to win" are provided. In the setting of deep reinforcement learning, there is at least an objective of getting a higher score to pursue.
As mentioned before, the method we exploit is to train a classification network on this rendered dataset (Fig. 2) and analyze the learnt representations with the technique of class activation mapping. As Zhou et al. (2016) suggest, we add one global average pooling layer after the last convolutional layer of a pre-trained AlexNet model. All fully connected layers of the AlexNet model are discarded, and a new fully connected layer is added after the global average pooling layer. After the new classification network is fine-tuned on our dataset, a CAM visualization is generated by weighting the outputs of the last convolutional layer with parameters from the added fully connected layer. Our CAM implementation is built upon Marvin and it will be released.
Due to the simplicity of this classification task, the top-one classification accuracy is 100% (not surprisingly). Class activation mapping results are provided in Fig. 3, and here we present the reasons why we claim concepts are learnt: (1) We provide 18 abstract categories, but in order to classify visual inputs into these 18 categories, the network's attention is roughly fixed upon chessboard grids. This means the concept of grid emerges in the learnt representation. (2) If we place a piece at the most activated location in Fig. 3, that will be the right (and legal) move to finish the game. On one hand, this means the concept of winning rule emerges in the learnt representation. On the other hand, this means this learnt concept can be used to deal with an un-taught task (analogous to Jia et al. (2013), Lake et al. (2015) and Higgins et al. (2016), who use generalization ability to illustrate that concepts are learnt). (3) As Figs. 3c, e, h, i, j, n, p and q show, both sides could win in the next move if we violated the take-turns rule. However, the network pays attention to the right location that is consistent with the rule. For example, in Fig. 3j, it seems that placing a black piece at the left-top location will also end the game. However, this move would violate the rule because there are already more black pieces than white pieces, meaning that this is the white side's turn. This means that the concept of two sides emerges in the learnt representation.
Beyond the learnt concepts, we analyze what this experiment provides for the remaining two questions. To the second question: the results in Fig. 3 show that the methodology of generating labels from one modality (state transfer vectors in our case) to supervise another modality is still applicable. More importantly, we use images as inputs, yet the learnt visual representations contain not only visual saliency information but also untold chess game concepts. To the third question: as Fig. 3 shows, most activated regions are empty spaces on the chessboard."}, {"section_index": "7", "section_name": "EXPERIMENT II: ADDING GRID LINES", "section_text": "Since we claim complicated concepts emerge in learnt visual representations, a natural question will be: if the chessboard's and pieces' appearances are changed, does this experiment still work?
Thus we design this experiment by adding grid lines to the chessboards when rendering the synthetic data (Fig. 4). The intention behind this design is three-fold: (1) in this case, the chessboard's appearance is changed; (2) after these lines are added, the concept that there is a chessboard grid is actually implied. Still, we do not think these lines directly provide the concept of a chessboard grid, thus we use the word imply. Whether the network can figure out what these lines mean still remains uncertain; (3) those locations that are completely empty in Experiment I are no longer empty from the perspective of information (though still empty from the perspective of the game rule).
We train the same network on the newly rendered dataset with grid lines and calculate CAM results in the same way. The results are demonstrated by Fig. 4. Generally speaking, the grid lines allow the network to better activate at the location of the right move, making it stand out more on the heatmap. What does this mean for the three intentions mentioned in the last paragraph? (1) Firstly, it shows that our experiment is robust to chessboard appearance variance. (2) Secondly, after implying the concept that there is a chessboard grid, the network performs better at paying attention to the location of the right move. Again we compare this phenomenon against how a human baby learns: although not supported by a psychological experiment, we think that with a chessboard grid a human baby would find it easier to figure out the game rule than without. (3) Thirdly, the heatmap changes in Fig. 4 are not surprising, because after adding those lines, the empty (from the perspective of the game rule) regions contain more gradients for the lower layers of a CNN to collect. However, it again supports that activating at non-salient regions is NOT trivial.
Figure 4: Class activation mapping results after grid lines are added
In this experiment we change the appearance of the pieces by: (1) replacing black boxes with white circles; (2) replacing white boxes with black crosses. Note that in this case the white side moves first. Again we train the same network and visualize with CAM. The results comparison is provided in Fig. 5. Further, we add grid lines to the cross/circle chessboard.
Figure 5: Class activation mapping results after piece appearance is changed
In order to further demonstrate the non-triviality of the model behaviors, we design this experiment: we train on the dataset from Experiment I for 1000 iterations and snapshot the parameters at the 500th iteration. The classification accuracy is 100% at the 1000th iteration and 53.13% at the 500th iteration. The CAM results are shown by Fig. 6, in which all samples are true positives. We think this shows that there are two ways to achieve this classification task: (1) by paying attention to the visual patterns formed by the existing pieces; (2) by paying attention to where the next piece should be placed. This experiment shows that at an earlier stage of learning the model's behavior is consistent with the first hypothesis, and after the training is completely done the network can finally fire at the correct location.
Figure 6: Class activation mapping results on true positive samples at 500 iterations (left, 53.13% accuracy) and 1000 iterations (right, 100% accuracy)
Figure 7: We propose two quantitative evaluation protocols: (a) by selecting the most activated patch, we calculate how frequently the representation fires at the correct location; (b) we correlate the representation with an ideal activation map.
We propose two different quantitative evaluation protocols. The first one is representation accuracy (RAC), for which we select the most activated patch and examine whether it is the correct location to end the game. The second one is representation consistency (RCO), which correlates the normalized representation with a normalized ideal activation map. The quantitative comparisons are shown in Table 1, in which NAC stands for network classification accuracy. These results quantitatively support that: (1) the learnt representations can be used to predict the right move with an accuracy of over 70%; (2) adding grid lines (implying the concept of a chessboard) dramatically improves localization.
Table 1: Quantitative comparison of the experiments (NAC: network classification accuracy; RAC: representation accuracy; RCO: representation consistency).
Experiment      I (original)   II (grid)   III (piece)   III (piece+grid)   IV (500th)
NAC (%)         100.00         100.00      100.00        100.00             53.13
RAC (%)         71.82          97.25       83.77         99.00              27.87
RCO (x 10^-3)   -8.096         -5.115      -7.751        -4.9321            -10.610
The core experiment in this paper is to train a classification CNN on rendered chessboard images under weak labels. After class activation mapping visualization, we analyse and interpret the results in three different backgrounds. Although simple, we argue that our results are enough to show that: (1) a CNN can automatically figure out complicated game rule concepts in this case;
(2) cross-modal supervision for representation learning is still applicable in this case of higher-level semantics; (3) the technique of CAM can activate at non-salient regions, testifying to CNN's capability to collect information from context in an extreme case (where only the context carries information)."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Samuel Albanie and Andrea Vedaldi. Learning grimaces by watching tv. In BMVC, 2016.
Lluis Castrejon, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Learning aligned cross-modal representations from weakly aligned data. In CVPR, 2016.
Saurabh Gupta, Judy Hoffman, and Jitendra Malik. Cross modal distillation for supervision transfer. In CVPR, 2016.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. In Science, 2015.
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
Jingjing Yang, Yuanning Li, Yonghong Tian, Ling-Yu Duan, and Wen Gao. Per-sample multiple kernel approach for visual concept learning. In Journal on Image and Video Processing, 2010.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene cnns. In ICLR, 2015.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, 2016.
Shiai Zhu, Gang Wang, Chong-Wah Ngo, and Yu-Gang Jiang. On the sampling of web images for learning visual concept classifiers. In Proceedings of the ACM International Conference on Image and Video Retrieval, 2010."}]
ryb-q1Olg | [{"section_index": "0", "section_name": "RECTIFIED FACTOR NETWORKS FOR BICLUSTERING", "section_text": "Djork-Arne Clevert, Thomas Unterthiner, & Sepp Hochreiter
Biclustering is evolving into one of the major tools for analyzing large datasets given as matrices of samples times features. Biclustering has several noteworthy applications and has been successfully applied in life sciences and e-commerce for drug design and recommender systems, respectively. FABIA is one of the most successful biclustering methods and is used by companies like Bayer, Janssen, or Zalando. FABIA is a generative model that represents each bicluster by two sparse membership vectors: one for the samples and one for the features. However, FABIA is restricted to about 20 code units because of the high computational complexity of computing the posterior. Furthermore, code units are sometimes insufficiently decorrelated. Sample membership is difficult to determine because the vectors do not have exact zero entries and can have both large positive and large negative values.
On 400 benchmark datasets with artificially implanted biclusters, RFN significantly outperformed 13 other biclustering competitors including FABIA. In biclustering experiments on three gene expression datasets with known clusters that were determined by separate measurements, RFN biclustering was twice significantly better than the other 13 methods and once in second place. On data of the 1000 Genomes Project, RFN could identify DNA segments which indicate that interbreeding with other hominins started already before the ancestors of modern humans left Africa."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We propose to use the recently introduced unsupervised Deep Learning approach Rectified Factor Networks (RFNs) to overcome the drawbacks of existing biclustering methods. RFNs efficiently construct very sparse, non-linear, high-dimensional representations of the input via their posterior means. RFN learning is a generalized alternating minimization algorithm based on the posterior regularization method which enforces non-negative and normalized posterior means. Each code unit represents a bicluster, where samples for which the code unit is active belong to the bicluster and features that have activating weights to the code unit belong to the bicluster.
Biclustering is widely used in statistics (A. Kasim & Talloen, 2016), and recently it also became popular in the machine learning community (O'Connor & Feizi, 2014; Lee et al., 2015; Kolar et al., 2011), e.g., for analyzing large dyadic data given in matrix form, where one dimension is the samples and the other the features. A matrix entry is a feature value for the according sample. A bicluster is a pair of a sample set and a feature set for which the samples are similar to each other on the features and vice versa. Biclustering simultaneously clusters rows and columns of a matrix. In particular, it clusters row elements that are similar to each other on a subset of column elements. In contrast to standard clustering, the samples of a bicluster are only similar to each other on a subset of features. Furthermore, a sample may belong to different biclusters or to no bicluster at all. Thus, biclusters can overlap in both dimensions.
For example, in drug design, biclusters are compounds which activate the same gene module and thereby indicate a side effect. In this example, different chemical compounds are added to a cell line and the gene expression is measured (Verbist et al., 2015). If multiple pathways are active in a sample, it belongs to different biclusters and may have different side effects. In e-commerce, matrices of customers times products are often available, where an entry indicates whether a customer bought the product or not. Biclusters are customers which buy the same subset of products. In a collaboration with the internet retailer Zalando, the biclusters revealed outfits which were created by customers who selected certain clothes for a particular outfit.
FABIA (factor analysis for bicluster acquisition, (Hochreiter et al., 2010)) evolved into one of the most successful biclustering methods.
A detailed comparison has shown FABIA's superiority over existing biclustering methods both on simulated data and real-world gene expression data (Hochreiter et al., 2010). In particular, FABIA outperformed non-negative matrix factorization with sparseness constraints and state-of-the-art biclustering methods. It has been applied to genomics, where it identified task-relevant biological modules in gene expression data (Xiong et al., 2014). In the large drug design project QSTAR, FABIA was used to extract biclusters from a data matrix that contains bioactivity measurements across compounds (Verbist et al., 2015). Due to its successes, FABIA has become part of the standard microarray data processing pipeline at the pharmaceutical company Janssen Pharmaceuticals. FABIA has been applied to genetics, where it has been used to identify DNA regions that are identical by descent in different individuals. These individuals inherited an IBD region from a common ancestor (Hochreiter, 2013; Povysil & Hochreiter, 2014). FABIA is a generative model that enforces sparse codes (Hochreiter et al., 2010) and, thereby, detects biclusters. Sparseness of code units and parameters is essential for FABIA to find biclusters, since only few samples and few features belong to a bicluster. Each FABIA bicluster is represented by two membership vectors: one for the samples and one for the features. These membership vectors are both sparse since only few samples and only few features belong to the bicluster.
However, FABIA has shortcomings, too. A disadvantage of FABIA is that it is only feasible with about 20 code units (the biclusters) because of the high computational complexity, which depends cubically on the number of biclusters, i.e. the code units. If fewer code units were used, only the large and common input structures would be detected, thereby occluding the small and rare ones. Another shortcoming of FABIA is that units are insufficiently decorrelated and, therefore, multiple units may encode the same event or part of it. A third shortcoming of FABIA is that the membership vectors do not have exact zero entries, that is, the membership is continuous and a threshold has to be determined. This threshold is difficult to adjust. A fourth shortcoming is that biclusters can have both large positive and large negative sample memberships (that is, positive or negative code values). In this case it is not clear whether the positive pattern or the negative pattern has been recognized.
Rectified Factor Networks (RFNs; Clevert et al. (2015)) overcome the shortcomings of FABIA. The first shortcoming of only few code units is avoided by extending FABIA to thousands of code units. RFNs introduce rectified units to FABIA's posterior distribution and, thereby, allow for fast computations on GPUs. They are the first methods which apply rectification to the posterior distribution of factor analysis and matrix factorization, though rectification is well established in Deep Learning by rectified linear units (ReLUs). RFNs transfer the methods for rectification from the neural network field to latent variable models. Addressing the second shortcoming of FABIA, RFNs achieve decorrelation by increasing the sparsity of the code units using dropout from the field of Deep Learning. RFNs also address the third FABIA shortcoming, since the rectified posterior means yield exact zero values. Therefore, memberships to biclusters are readily obtained from the values that are not zero. Since RFNs only have non-negative code units, the problem of separating the negative from the positive pattern disappears.
We propose to use the recently introduced Rectified Factor Networks (RFNs; Clevert et al. (2015)) for biclustering to overcome the drawbacks of the FABIA model. The factor analysis model and the construction of a bicluster matrix are depicted in Fig. 1. RFNs efficiently construct very sparse, non-linear, high-dimensional representations of the input. RFN models identify rare and small events in the input, have a low interference between code units, have a small reconstruction error, and explain the data covariance structure.
Figure 1: Left: Factor analysis model: hidden units (factors) h, visible units v, weight matrix W, noise $\epsilon$. Right: The outer product $w\,h^{T}$ of two sparse vectors results in a matrix with a bicluster. Note that the non-zero entries in the vectors are adjacent to each other for visualization purposes only.
The RFN model is a factor analysis model
$v \;=\; W h \,+\, \epsilon \ ,$   (1)
which extracts the covariance structure of the data. The prior $h \sim \mathcal{N}(0, I)$ of the hidden units (factors) $h \in \mathbb{R}^{l}$ and the noise $\epsilon \sim \mathcal{N}(0, \Psi)$ of the visible units (observations) $v \in \mathbb{R}^{m}$ are independent. The model parameters are the weight (factor loading) matrix $W \in \mathbb{R}^{m \times l}$ and the noise covariance matrix $\Psi \in \mathbb{R}^{m \times m}$.
RFN models are selected via the posterior regularization method (Ganchev et al., 2010).
For data $\{v\} = \{v_1, ..., v_n\}$, it maximizes the objective $\mathcal{F}$:
$\mathcal{F} \;=\; \frac{1}{n}\sum_{i=1}^{n} \log p(v_i) \;-\; \frac{1}{n}\sum_{i=1}^{n} D_{\mathrm{KL}}\big(Q(h_i \mid v_i)\,\big\|\,p(h_i \mid v_i)\big)$   (2)
For a Gaussian posterior $Q(h_i \mid v_i)$, the mean and covariance are
$(\mu)_i \;=\; \big(I + W^{T}\Psi^{-1}W\big)^{-1} W^{T}\Psi^{-1} v_i \ , \qquad \Sigma \;=\; \big(I + W^{T}\Psi^{-1}W\big)^{-1}$   (3)
For rectified Gaussian posterior distributions, $\Sigma$ remains as in the Gaussian case, but minimizing the second term of Eq. (2), the $D_{\mathrm{KL}}$, leads to a constrained optimization problem (see Clevert et al. (2015)):
$\min_{\{\mu_i\}} \ \frac{1}{n}\sum_{i=1}^{n} D_{\mathrm{KL}}\big(Q(h_i \mid v_i)\,\big\|\,p(h_i \mid v_i)\big) \quad \text{s.t.} \quad \forall i: \ \mu_i \geq 0 \ , \quad \forall j: \ \frac{1}{n}\sum_{i=1}^{n} \mu_{ij}^{2} = 1 \ ,$   (4)
where "$\geq$" is component-wise. In the E-step of the generalized alternating minimization algorithm (Ganchev et al., 2010), which is used for RFN model selection, we only perform a step of the gradient projection algorithm (Bertsekas, 1976; Kelley, 1999), in particular a step of the projected Newton method for solving Eq. (4) (Clevert et al., 2015). Therefore, RFN model selection is extremely efficient but still guarantees the correct solution.
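As an illustration, a minimal numpy sketch of this rectified E-step is given below. It is our sketch, not the authors' GPU implementation, and for brevity it replaces the projected Newton step with a direct rectify-and-rescale projection onto the constraints of Eq. (4).

```python
import numpy as np

def rectified_posterior_means(V, W, Psi, eps=1e-8):
    """Rectified E-step sketch: V is the (m, n) data matrix, W the (m, l)
    loading matrix, Psi the (m, m) noise covariance.  Returns (l, n) codes."""
    A = np.linalg.solve(Psi, W)                        # Psi^{-1} W, shape (m, l)
    S = np.linalg.inv(np.eye(W.shape[1]) + W.T @ A)    # posterior covariance, Eq. (3)
    Mu = S @ A.T @ V                                   # unconstrained means, Eq. (3)
    Mu = np.maximum(Mu, 0.0)                           # rectification: mu_i >= 0
    # rescale each code unit j so that (1/n) * sum_i mu_{ij}^2 = 1
    scale = np.sqrt(np.mean(Mu ** 2, axis=1, keepdims=True)) + eps
    return Mu / scale
```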
For an RFN model, each code unit represents a bicluster, where samples for which the code unit is active belong to the bicluster. On the other hand, features that activate the code unit belong to the bicluster, too. The vector of activations of a unit across all samples is the sample membership vector. The weight vector which activates the unit is the feature membership vector. The unconstrained posterior mean vector is computed by multiplying the input with a matrix according to Eq. (3). The constrained posterior of a code unit is obtained by multiplying the input by a vector and subsequently rectifying and normalizing the code unit (Clevert et al., 2015).
To keep the feature membership vectors sparse, we introduce a Laplace prior on the parameters. Therefore only few features contribute to activating a code unit, that is, only few features belong to a bicluster. Sparse weights $W$ are achieved by a component-wise independent Laplace prior for the weights:
$p(W) \;=\; \prod_{k=1}^{m}\prod_{i=1}^{l} \tfrac{1}{2}\, e^{-\,|W_{ki}|}$   (5)
The weight update for RFN (Laplace prior on the weights) is
$W^{\mathrm{new}} \;=\; W \;+\; \eta\,\big(U\,S^{-1} - W\big) \;-\; \alpha\,\operatorname{sign}(W) \ ,$   (6)
where $U$ and $S$ are the sufficient statistics of the factor analysis M-step (Clevert et al., 2015); the sparseness of the weight matrix can be controlled by the hyper-parameter $\alpha$. To enforce more sparseness of the sample membership vectors, we introduce dropout of code units. Dropout means that during training some code units are set to zero at the same time as they get rectified. Dropout avoids co-adaptation of code units and reduces correlation of code units - a problem of FABIA which is thereby solved.
RFN biclustering does not require a threshold for determining sample memberships to a bicluster, since rectification sets code units to zero. Further, crosstalk between biclusters via mixing up negative and positive memberships is avoided, therefore spurious biclusters appear less often.
In this section, we will present numerical results on multiple synthetic and real data sets to verify the performance of our RFN biclustering algorithm, and compare it with various other biclustering methods."}, {"section_index": "2", "section_name": "3.1 METHODS COMPARED", "section_text": "To assess the performance of rectified factor networks (RFNs) as an unsupervised biclustering method, we compare the following 14 biclustering methods:
(1) RFN: rectified factor networks (Clevert et al., 2015), (2) FABIA: factor analysis with Laplace prior on the hidden units (Hochreiter et al., 2010; Hochreiter, 2013), (3) FABIAS: factor analysis with sparseness projection (Hochreiter et al., 2010), (4) MFSC: matrix factorization with sparseness constraints (Hoyer, 2004), (5) plaid: plaid model (Lazzeroni & Owen, 2002; T. Chekouo & Raffelsberger, 2015), (6) ISA: iterative signature algorithm (Ihmels et al., 2004), (7) OPSM: order-preserving sub-matrices (Ben-Dor et al., 2003), (8) SAMBA: statistical-algorithmic method for bicluster analysis (Tanay et al., 2002), (9) xMOTIF: conserved motifs (Murali & Kasif, 2003), (10) Bimax: divide-and-conquer algorithm (Prelic et al., 2006), (11) CC: Cheng-Church δ-biclusters (Cheng & Church, 2000), (12) plaid_t: improved plaid model (Turner et al., 2003), (13) FLOC: flexible overlapped biclustering, a generalization of CC (Yang et al., 2005), and (14) spec: spectral biclustering (Kluger et al., 2003).
For a fair comparison, the parameters of the methods were optimized on auxiliary toy data sets. If more than one setting was close to the optimum, all near-optimal parameter settings were tested. In the following, these variants are denoted as method_variant (e.g. plaid_ss). For RFN we used the following parameter setting: 13 hidden units, a dropout rate of 0.1, 500 iterations with a learning rate of 0.1, and the parameter $\alpha$ (controlling the sparseness of the weights) set to 0.01."}, {"section_index": "3", "section_name": "3.2 SIMULATED DATA SETS WITH KNOWN BICLUSTERS", "section_text": "In the following subsections, we describe the data generation process and results for synthetically generated data according to either a multiplicative or additive model structure.
We assumed n = 1000 genes and l = 100 samples and implanted p = 10 multiplicative biclusters. The bicluster data sets with p biclusters are generated by the following model:
$X \;=\; \sum_{i=1}^{p} \lambda_i\, z_i^{T} \;+\; \Upsilon \ ,$   (7)
where $\Upsilon \in \mathbb{R}^{n \times l}$ is additive noise and $\lambda_i \in \mathbb{R}^{n}$ and $z_i \in \mathbb{R}^{l}$ are the bicluster membership vectors for the i-th bicluster. The $\lambda_i$'s are generated by (i) randomly choosing the number $N_i^{\lambda}$ of genes in bicluster i from {10, ..., 210}, (ii) choosing $N_i^{\lambda}$ genes randomly from {1, ..., 1000}, (iii) setting $\lambda_i$ components not in bicluster i to N(0, 0.2^2) random values, and (iv) setting $\lambda_i$ components that are in bicluster i to N(3, 1) random values, where the sign is chosen randomly for each gene. The $z_i$'s are generated by (i) randomly choosing the number $N_i^{z}$ of samples in bicluster i from {5, ..., 25}, (ii) choosing $N_i^{z}$ samples randomly from {1, ..., 100}, (iii) setting $z_i$ components not in bicluster i to N(0, 0.2^2) random values, and (iv) setting $z_i$ components that are in bicluster i to N(2, 1) random values. Finally, we draw the $\Upsilon$ entries (additive noise on all entries) according to N(0, 3^2) and compute the data X according to Eq. (7). Using these settings, noisy biclusters of random sizes between 10 x 5 and 210 x 25 (genes x samples) are generated. In all experiments, rows (genes) were standardized to mean 0 and variance 1.
In this experiment we generated biclustering data where biclusters stem from an additive two-way ANOVA model:
$X \;=\; \sum_{i=1}^{p} \big(\lambda_i\, z_i^{T}\big) \odot \Gamma_i \;+\; \Upsilon \ , \qquad \gamma_{ikj} \;=\; \mu_i + \alpha_{ik} + \beta_{ij} \ ,$   (8)
where $\odot$ is the element-wise product of matrices and both $\lambda_i$ and $z_i$ are binary indicator vectors which indicate the rows and columns belonging to bicluster i. The i-th bicluster is described by an ANOVA model with mean $\mu_i$, k-th row effect $\alpha_{ik}$ (first factor of the ANOVA model), and j-th column effect $\beta_{ij}$ (second factor of the ANOVA model). The ANOVA model does not have interaction effects. While the ANOVA model is described for the whole data matrix, only the effects on rows and columns belonging to the bicluster are used in data generation.
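For concreteness, the multiplicative generation procedure of Eq. (7) can be sketched in numpy as follows. This is an illustration under the sampling conventions stated above; the additive variant of Eq. (8) only changes how the in-bicluster values are filled in.

```python
import numpy as np

def make_multiplicative_data(n=1000, l=100, p=10,
                             rng=np.random.default_rng(0)):
    """Generate X = sum_i lambda_i z_i^T + Upsilon as in Subsection 3.2.1."""
    X = np.zeros((n, l))
    for _ in range(p):
        lam = rng.normal(0.0, 0.2, size=n)              # background values
        genes = rng.choice(n, size=rng.integers(10, 211), replace=False)
        signs = rng.choice([-1.0, 1.0], size=genes.size)
        lam[genes] = signs * rng.normal(3.0, 1.0, size=genes.size)
        z = rng.normal(0.0, 0.2, size=l)
        samples = rng.choice(l, size=rng.integers(5, 26), replace=False)
        z[samples] = rng.normal(2.0, 1.0, size=samples.size)
        X += np.outer(lam, z)
    X += rng.normal(0.0, 3.0, size=(n, l))              # additive noise Upsilon
    # standardize rows (genes) to mean 0 and variance 1
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
```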
Noise and bicluster sizes are generated as in the previous Subsection 3.2.1.
Data was generated for three different signal-to-noise ratios, which are determined by the distribution from which $\mu_i$ is chosen: A1 (low signal) N(0, 2^2), A2 (moderate signal) N(2, 0.5^2), and A3 (high signal) N(4, 0.5^2), where the sign of the mean is randomly chosen. The row effects $\alpha_{ik}$ are chosen from N(0.5, 0.2^2) and the column effects $\beta_{ij}$ are chosen from N(1, 0.5^2)."}, {"section_index": "4", "section_name": "3.2.3 RESULTS ON SIMULATED DATA SETS", "section_text": "For method evaluation, we use the previously introduced biclustering consensus score for two sets of biclusters (Hochreiter et al., 2010), which is computed as follows:
1. Compute similarities between all pairs of biclusters by the Jaccard index, where one is from the first set and the other from the second set;
2. Assign the biclusters of one set to biclusters of the other set by maximizing the assignment by the Munkres algorithm;
3. Divide the sum of similarities of the assigned biclusters by the number of biclusters of the larger set.
Step (3) penalizes different numbers of biclusters in the sets. The highest consensus score is 1 and it is only obtained for identical sets of biclusters.
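A compact sketch of this consensus score is given below. It is an illustration only: it assumes each bicluster is given as a (row set, column set) pair, computes the Jaccard index on the matrix cells the biclusters cover, and uses scipy's Hungarian-method solver for the Munkres assignment step.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def jaccard(b1, b2):
    """Jaccard index between two biclusters given as (row_set, col_set)
    pairs, compared on the sets of matrix cells they cover."""
    c1 = {(r, c) for r in b1[0] for c in b1[1]}
    c2 = {(r, c) for r in b2[0] for c in b2[1]}
    return len(c1 & c2) / max(len(c1 | c2), 1)

def consensus_score(found, truth):
    S = np.array([[jaccard(f, t) for t in truth] for f in found])
    rows, cols = linear_sum_assignment(-S)   # maximize total similarity
    # dividing by the size of the larger set penalizes extra or missing
    # biclusters (step 3 above)
    return S[rows, cols].sum() / max(len(found), len(truth))
```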
Table 1 shows the biclustering results for these data sets. RFN significantly outperformed all other methods (t-test and McNemar test of correct elements in biclusters).
Table 1: Results are the mean of 100 instances for each simulated data set. Data sets M1 and A1-A3 contain multiplicative and additive biclusters, respectively. The numbers denote average consensus scores with the true biclusters together with their standard deviations in parentheses. The best results are printed bold and the second best in italics ("better" means significantly better according to both a paired t-test and a McNemar test of correct elements in biclusters).
                 multiplic. model    additive model
Method           M1                  A1              A2              A3
RFN              0.643 (7e-4)        0.475 (9e-4)    0.640 (1e-2)    0.816 (6e-7)
FABIA            0.478 (1e-2)        0.109 (6e-2)    0.196 (8e-2)    0.475 (1e-1)
FABIAS           0.564 (3e-3)        0.150 (7e-2)    0.268 (7e-2)    0.546 (1e-1)
SAMBA            0.006 (5e-5)        0.002 (6e-4)    0.002 (5e-4)    0.003 (8e-4)
xMOTIF           0.002 (6e-5)        0.002 (4e-4)    0.002 (4e-4)    0.001 (4e-4)
MFSC             0.057 (2e-3)        0.000 (0e-0)    0.000 (0e-0)    0.000 (0e-0)
Bimax            0.004 (2e-4)        0.009 (8e-3)    0.010 (9e-3)    0.014 (1e-2)
plaid_ss         0.045 (9e-4)        0.039 (2e-2)    0.041 (1e-2)    0.074 (3e-2)
CC               0.001 (7e-6)        4e-4 (3e-4)     3e-4 (2e-4)     1e-4 (1e-4)
plaid_ms         0.072 (4e-4)        0.064 (3e-2)    0.072 (2e-2)    0.112 (3e-2)
plaid_t_ab       0.046 (5e-3)        0.021 (2e-2)    0.005 (6e-3)    0.022 (2e-2)
plaid_ms_5       0.083 (6e-4)        0.098 (4e-2)    0.143 (4e-2)    0.221 (5e-2)
plaid_t_a        0.037 (4e-3)        0.039 (3e-2)    0.010 (9e-3)    0.051 (4e-2)
FLOC             0.006 (3e-5)        0.005 (9e-4)    0.005 (1e-3)    0.003 (9e-4)
ISA              0.333 (5e-2)        0.039 (4e-2)    0.033 (2e-2)    0.140 (7e-2)
spec             0.032 (5e-4)        0.000 (0e-0)    0.000 (0e-0)    0.000 (0e-0)
OPSM             0.012 (1e-4)        0.007 (2e-3)    0.007 (2e-3)    0.008 (2e-3)"}, {"section_index": "5", "section_name": "3.3 GENE EXPRESSION DATA SETS", "section_text": "In this experiment, we test the biclustering methods on gene expression data sets, where the biclusters are gene modules. The genes that are in a particular gene module belong to the according bicluster, and the samples for which the gene module is activated belong to the bicluster. We consider three gene expression data sets which have been provided by the Broad Institute and were previously clustered by Hoshida et al. (2007) using additional data sets. Our goal was to study how well biclustering methods are able to recover these clusters without any additional information.
(A) The "breast cancer" data set (van't Veer et al., 2002) was aimed at a predictive gene signature for the outcome of a breast cancer therapy. We removed the outlier array S54, which leads to a data set with 97 samples and 1213 genes. In Hoshida et al. (2007), three biologically meaningful subclasses were found that should be re-identified.
(B) The "multiple tissue types" data set (Su et al., 2002) contains gene expression profiles from human cancer samples from diverse tissues and cell lines. The data set contains 102 samples with 5565 genes. Biclustering should be able to re-identify the tissue types.
(C) The "diffuse large-B-cell lymphoma (DLBCL)" data set (Rosenwald et al., 2002) was aimed at predicting the survival after chemotherapy. It contains 180 samples and 661 genes. The three classes found by Hoshida et al. (2007) should be re-identified.
For methods assuming a fixed number of biclusters, we chose five biclusters - slightly higher than the number of known clusters to avoid biases towards prior knowledge about the number of actual clusters. Besides the number of hidden units (biclusters), we used the same parameters as described in Sec. 3.1. The performance was assessed by comparing the known classes of samples in the data sets with the sample sets identified by biclustering, using the consensus score defined in Subsection 3.2.3 - here the score is evaluated for sample clusters instead of biclusters. The biclustering results are summarized in Table 2. RFN biclustering yielded significantly better results than all other methods on two out of three data sets and was in second place on the third (significantly, according to a McNemar test of correct samples in clusters)."}, {"section_index": "6", "section_name": "3.4 1000 GENOMES DATA SETS", "section_text": "In this experiment, we used RFN for detecting DNA segments that are identical by descent (IBD). A DNA segment is IBD in two or more individuals if they have inherited it from a common ancestor, that is, the segment has the same ancestral origin in these individuals. Biclustering is well-suited to detect such IBD segments in a genotype matrix (Hochreiter, 2013; Povysil & Hochreiter, 2014), which has individuals as row elements and genomic structural variations (SNVs) as column elements. Entries in the genotype matrix usually count how often the minor allele of a particular SNV is present in a particular individual. Individuals that share an IBD segment are similar to each other because they also share minor alleles of SNVs (tagSNVs) within the IBD segment. Individuals that share an IBD segment represent a bicluster.
For our IBD analysis we used the next generation sequencing data from the 1000 Genomes Phase 3.
This data set consists of low-coverage whole genome sequences from 2,504 individuals of the main continental population groups (Africans (AFR), Asians (ASN), Europeans (EUR), and Admixed Americans (AMR)). Individuals that showed cryptic first degree relatedness to others were removed, so that the final data set consisted of 2,493 individuals. Furthermore, we also included archaic human and human ancestor genomes, in order to gain insights into the genetic relationships between humans, Neandertals and Denisovans. The common ancestor genome was reconstructed from human, chimpanzee, gorilla, orang-utan, macaque, and marmoset genomes.
Table 2: Results on the (A) breast cancer, (B) multiple tissue samples, (C) diffuse large-B-cell lymphoma (DLBCL) data sets measured by the consensus score. An "nc" entry means that the method did not converge for this data set. The best results are in bold and the second best in italics ("better" means significantly better according to a McNemar test of correct samples in clusters). The columns "#bc", "#g", "#s" provide the numbers of biclusters, their average numbers of genes and their average numbers of samples, respectively. RFN is twice the best method and once in second place.
              (A) breast cancer         (B) multiple tissues       (C) DLBCL
method        score  #bc  #g    #s      score  #bc  #g    #s      score  #bc  #g   #s
RFN           0.57   3    73    31      0.77   5    75    33      0.35   2    59   72
FABIA         0.52   3    92    31      0.53   5    356   29      0.37   2    59   62
FABIAS        0.52   3    144   32      0.44   5    435   30      0.35   2    104  60
MFSC          0.17   5    87    24      0.31   5    431   24      0.18   5    50   42
plaid_ss      0.39   5    500   38      0.56   5    1903  35      0.30   5    339  72
plaid_ms      0.39   5    175   38      0.50   5    71    42      0.28   5    143  63
plaid_ms_5    0.29   5    56    29      0.23   5    71    26      0.21   5    68   47
ISA_1         0.03   25   55    4       0.05   29   230   6       0.01   56   26   8
OPSM          0.04   12   172   8       0.04   19   643   12      0.03   6    162  4
SAMBA         0.02   38   37    7       0.03   59   53    8       0.02   38   19   15
xMOTIF        0.07   5    61    6       0.11   5    628   6       0.05   5    9    9
Bimax         0.01   1    1213  97      0.10   4    35    5       0.07   5    73   5
CC            0.11   5    12    12      0.05   5    10    10      nc     nc   nc   nc
plaid_t_ab    0.24   2    40    23      0.38   5    255   22      0.17   1    3    44
plaid_t_a     0.23   2    24    20      0.39   5    274   24      0.11   3    6    24
spec          0.12   13   198   28      0.37   5    395   20      0.05   28   133  32
FLOC          0.04   5    343   5       nc     nc   nc    nc      0.03   5    167  5
Figure 2: Example of an IBD segment matching the Neandertal genome shared among multiple populations. The rows give all individuals that contain the IBD segment and the columns give consecutive SNVs. Major alleles are shown in yellow, minor alleles of tagSNVs in violet, and minor alleles of other SNVs in cyan. The row labeled model L indicates tagSNVs identified by RFN in violet. The rows Ancestor, Neandertal, and Denisova show bases of the respective genomes in violet if they match the minor allele of the tagSNVs (in yellow otherwise). For the Ancestor genome we used the reconstructed common ancestor sequence that was provided as part of the 1000 Genomes Project data.
Afterwards, all chromosomes were divided into intervals of 10,o0o variants. with adjacent intervals overlapping by 5.O00 variants.\nIn the data of the 1o0o Genomes Project, we found IBD-based indications of interbreeding between ancestors of humans and other ancient hominins within Africa (see Fig. 2 as an example of an IBD segment that matches the Neandertal genome).."}, {"section_index": "7", "section_name": "4 CONCLUSION", "section_text": "We have introduced rectified factor networks (RFNs) for biclustering and benchmarked it with 13 other biclustering methods on artificial and real-world data sets\nOn 400 benchmark data sets with artificially implanted biclusters, RFN significantly outperformed. all other biclustering competitors including FABIA. On three gene expression data sets with pre viously verified ground-truth, RFN biclustering yielded twice significantly better results than all other methods and was once the second best performing method. On data of the 1ooo Genomes Project, RFN could identify IBD segments which support the hypothesis that interbreeding between. ancestors of humans and other ancient hominins already have taken place in Africa.\nRFN biclustering is geared to large data sets, sparse coding, many coding units, and distinct mem-. bership assignment. Thereby RFN biclustering overcomes the shortcomings of FABIA and has the potential to become the new state of the art biclustering algorithm..\nAcknowledgment. We thank the NVIDIA Corporation for supporting this research with several Titan X GPUs.\nL. Lazzeroni and A. Owen. Plaid models for gene e pression data. Stat. Sinica, 12(1):61-86, 2002\nT. M. Murali and S. Kasif. Extracting conserved gene expression motifs from gene expression data In Pac. Symp. Biocomputing, pp. 77-88, 2003.\nLuke O' Connor and Soheil Feizi. Biclustering using message passing. In Z. Ghahramani. M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural In formation Processing Systems 27, pp. 3617-3625. Curran Associates, Inc., 2014\nand High-Dimensional Data Using R. Chapman and Hall/CRC, 2016. A. Ben-Dor, B. Chor, R. Karp, and Z. Yakhini. Discovering local structure in gene expression data: the order-preserving submatrix problem. J. Comput. Biol., 10(3-4):373-384, 2003. D. P. Bertsekas. On the Goldstein-Levitin-Polyak gradient projection method. IEEE Trans. Automat. Control. 21:174-184. 1976 Y. Cheng and G. M. Church. Biclustering of expression data. In Proc. Int. Conf. on Intelligent Systems for Molecular Biology, volume 8, pp. 93-103, 2000. D.-A. Clevert, T. Unterthiner, A. Mayr, and S. Hochreiter. Rectified factor networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28. Curran Associates, Inc., 2015. K. Ganchev, J. Graca, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11:2001-2049, 2010. S. Hochreiter. HapFABIA: Identification of very short segments of identity by descent characterized by rare variants in large sequencing data. Nucleic Acids Res., 41(22):e202, 2013. doi: 10.1093/ nar/gkt1013. S. Hochreiter, U. Bodenhofer, M. Heusel, A. Mayr, A. Mitterecker, A. Kasim, S. VanSanden, D. Lin. W. Talloen, L. Bijnens, H. W. H. Gohlmann, Z. Shkedy, and D.-A. Clevert. FABIA: factor analysis for bicluster acquisition. Bioinformatics, 26(12):1520-1527, 2010. doi: 10.1093/bioinformatics/ btq227. Y. Hoshida, J.-P. Brunet, P. Tamayo, T. R. Golub, and J. P. 
P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res., 5:1457-1469, 2004.
J. Ihmels, S. Bergmann, and N. Barkai. Defining transcription modules using large-scale gene expression data. Bioinformatics, 20(13):1993-2003, 2004.
A. Kasim, Z. Shkedy, S. Kaiser, S. Hochreiter, and W. Talloen (eds.). Applied Biclustering Methods for Big and High-Dimensional Data Using R. Chapman and Hall/CRC, 2016.
C. T. Kelley. Iterative Methods for Optimization. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, 1999.
Y. Kluger, R. Basri, J. T. Chang, and M. B. Gerstein. Spectral biclustering of microarray data: Coclustering genes and conditions. Genome Res., 13:703-716, 2003.
L. Lazzeroni and A. Owen. Plaid models for gene expression data. Stat. Sinica, 12(1):61-86, 2002.
T. M. Murali and S. Kasif. Extracting conserved gene expression motifs from gene expression data. In Pac. Symp. Biocomputing, pp. 77-88, 2003.
Luke O'Connor and Soheil Feizi. Biclustering using message passing. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 3617-3625. Curran Associates, Inc., 2014.
H. Turner, T. Bailey, and W. Krzanowski. Improved biclustering of microarray data demonstrated through systematic performance tests. Comput. Stat. Data An., 48(2):235-254, 2005.
L. J. van't Veer et al. Gene expression profiling predicts clinical outcome of breast cancer. Nature, 415:530-536, 2002.
B. Verbist, G. Klambauer, L. Vervoort, W. Talloen, Z. Shkedy, O. Thas, A. Bender, H. W. H. Gohlmann, and S. Hochreiter. Using transcriptomics to guide lead optimization in drug discovery projects: Lessons learned from the QSTAR project. Drug Discovery Today, 20(5):505-513, 2015. ISSN 1359-6446.
M. Xiong, B. Li, Q. Zhu, Y.-X. Wang, and H.-Y. Zhang. Identification of transcription factors for drug-associated gene modules and biomedical implications. Bioinformatics, 30(3):305-309, 2014.
J. Yang, H. Wang, W. Wang, and P. S. Yu. An improved biclustering method for analyzing gene expression profiles. Int. J. Artif. Intell. T., 14(5):771-790, 2005."}]
ByqiJIqxg | [{"section_index": "0", "section_name": "ONLINE BAYESIAN TRANSFER LEARNING FOR SEQUENTIAL DATA MODELING", "section_text": "Priyank Jaini1, Zhitang Chen4, Pablo Carbajal1, Edith Law1, Laura Middleton2, Kayla Regan2, Mike Schaekermann1, George Trimponias4, James Tung3, Pascal Poupart1
pjaini@uwaterloo.ca, chenzhitang2@huawei.com, pablo@veedata.io, {edith.law, lmiddlet, kregan}@uwaterloo.ca, g.trimponias@huawei.com, {mschaekermann, james.tung, ppoupart}@uwaterloo.ca
1 David R. Cheriton School of Computer Science, University of Waterloo, Ontario, Canada
2 Department of Kinesiology, University of Waterloo, Ontario, Canada
3 Dept. of Mechanical and Mechatronics Engineering, University of Waterloo, Ontario, Canada
4 Noah's Ark Laboratory, Huawei Technologies, Hong Kong, China"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "We consider the problem of inferring a sequence of hidden states associated with a sequence of observations produced by an individual within a population. Instead of learning a single sequence model for the population (which does not account for variations within the population), we learn a set of basis sequence models based on different individuals. The sequence of hidden states for a new individual is inferred in an online fashion by estimating a distribution over the basis models that best explain the sequence of observations of this new individual. We explain how to do this in the context of hidden Markov models with Gaussian mixture models that are learned based on streaming data by online Bayesian moment matching. The resulting transfer learning technique is demonstrated with three real-world applications: activity recognition based on smartphone sensors, sleep classification based on electroencephalography data and the prediction of the direction of future packet flows between a pair of servers in telecommunication networks."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "In several application domains, data instances are produced by a population of individuals that exhibit a variety of different characteristics. For instance, in activity recognition, different individuals might walk or run with different gait patterns. Similarly, in sleep studies, different individuals might exhibit different patterns for the same sleep stages. In telecommunication networks, software applications might generate packet flows between two servers according to different patterns. In such scenarios, it is tempting to treat the population as a homogeneous source of data and to learn a single average model for the population. However, this average model will perform poorly in recognition tasks for individuals that differ significantly from the average. Hence, there is a need for transfer learning techniques that take into account the variations between individuals within a population.
We consider the problem of inferring a sequence of hidden states based on a sequence of observations produced by an individual within a population. Our first contribution is an online Bayesian moment matching technique to estimate the parameters of a hidden Markov model (HMM) with observation distributions represented by Gaussian mixture models (GMMs). This approach allows us to learn separate basis models for different individuals based on streaming data. The second contribution is an unsupervised online technique that infers a probability distribution over the basis models that best explain the sequence of observations of a new individual.
The classification of hidden states can then be refined in an online fashion based on the individuals that most resemble the new individual. Furthermore, since the basis models are fixed at classification time and we only learn the weight of each model, good classification accuracy can be obtained more quickly as the stream of observations of the new individual is processed. The third contribution of this work is the demonstration of this approach across different real-world applications, which include activity recognition, sleep classification and the prediction of packet flow direction in telecommunication networks.
The paper is organized as follows. Section 2 reviews some related work on transfer learning. Section 3 provides some background regarding hidden Markov models, the Bayesian Moment Matching algorithm and Gaussian mixture models. Section 4 describes the proposed online transfer learning technique. Section 5 illustrates the transfer learning technique in three real-world tasks: activity recognition, sleep stage classification and flow direction prediction. Finally, Section 6 concludes the paper and discusses directions for future work.
There is a large literature on transfer learning (Pan & Yang, 2010; Taylor & Stone, 2009; Shao et al., 2015; Cook et al., 2013). Depending on the problem, the input features, the output labels or the distribution over the features and the labels may be different for the source and target domains. In this work, we assume that the same input features are measured and the same output labels are inferred in the source and target domains. The main problem that we consider is subject variability within a population of individuals, which means that different individuals exhibit different distributions over the features and the labels. The problem of subject variability has been studied in several papers. Chieu et al. (2006) describe how to augment conditional random fields with a subject hidden variable to obtain a mixture of conditional random fields that can naturally infer a distribution over the closest subjects in a training population when inferring the activities of a new individual based on physiological data. Rashidi & Cook (2009) proposed a data mining technique with a similarity measure to facilitate the transfer of activity recognition across different people. Chattopadhyay et al. (2011) describe a similarity measure with an intrinsic manifold that preserves the topology of surface electromyography (SEMG) while mitigating distributional differences among individuals. Zhao et al. (2011) proposed a transfer learning technique that starts by training a decision tree to recognize the activities of a user based on smartphone accelerometry. The decision tree is gradually adjusted to a new user by a clustering technique that successively re-weights the training data based on the unlabeled data of the new individual. These approaches mitigate subject variability by various offline transfer learning techniques. In contrast, we propose an online transfer learning technique, since the applications that we consider exhibit sequences of observations that arrive in a streaming fashion and therefore require an online technique that can infer the hidden state of each observation as it arrives.
In the next section, we describe an online transfer learning technique for hidden Markov models with Gaussian mixture models. The approach learns different transition and emission models for each individual in the training population. Those models are then treated as basis models to speed up the online learning process for new individuals. More specifically, a weighted combination of the basis models is learned for each new individual. This idea is related to boosting techniques for transfer learning (Dai et al., 2007; Yao & Doretto, 2010; Al-Stouhi & Reddy, 2011) that estimate a weighted combination of base classifiers. However, note that we focus on sequence modeling problems where
the classes of consecutive data points are correlated, while transfer learning by boosting assumes that the data points are identically and independently distributed."}, {"section_index": "3", "section_name": "3 BACKGROUND", "section_text": "In this section, we give a brief overview of hidden Markov models (HMMs) and review the Bayesian moment matching (BMM) algorithm in detail with an example. We will use both HMMs and BMM subsequently in our transfer learning algorithm described in Section 4."}, {"section_index": "4", "section_name": "3.1 HIDDEN MARKOV MODELS", "section_text": "In a hidden Markov model (HMM), each observation $X_t$ is associated with a hidden state $Y_t$. The Markov property states that the current state depends only on the previous state. HMMs have been widely used in domains involving sequential data like speech recognition, activity recognition, natural language processing, etc. An HMM is represented by two distributions.
In this paper, we will first estimate the parameters of the transition and emission distributions by Bayesian learning from a set of source domains (individuals). Subsequently, we will use these distributions as basis functions when estimating the transition and emission distributions of a target domain in which we wish to predict the hidden state for each observation. Parameter learning of an HMM using Bayesian learning is done by calculating the posterior over the parameters given a prior distribution.
Note that Variational Bayes (VB) and Markov Chain Monte Carlo (MCMC) techniques can also be used for approximate Bayesian learning as an alternative to BMM. However, MCMC is difficult to run in an online fashion. A recent comparison by Omar (2016) showed that BMM achieves better results than online Variational Bayes (oVB) (Sato, 2001) and Stochastic Variational Inference (SVI) (Wang et al., 2011) in the context of topic modeling. BMM was also shown to work better than other online techniques in several papers (Rashwan et al., 2016; Hsu & Poupart, 2016; Jaini et al., 2016). This is due to the fact that BMM is naturally online and therefore does not require mini-batches. In contrast, in order to run in an online fashion, Variational Bayes requires mini-batches and a decreasing learning rate; however, the size of the mini-batches and the decay procedure for the learning rate require some fine tuning. In general, the use of mini-batches always leads to some information loss since data in previous mini-batches is not accessible. BMM does not suffer from this type of information loss and there is no batch size nor learning rate to fine tune. Hence, we will adapt BMM to transfer learning in this work.
Transition distribution: The transition distribution models the change in the value of the hidden state over time. The distribution over the current state $Y_t$ given that the previous state is $Y_{t-1} = j$ is denoted by $\theta_j = \Pr(Y_t \mid Y_{t-1} = j)$, where $\theta_j = \{\theta_{1j}, ..., \theta_{Nj}\}$, N is the total number of states and $\theta_{ij} = \Pr(Y_t = i \mid Y_{t-1} = j)$.
Emission distribution: The emission distribution models the effect of the hidden state on the observation $X_t$ at any given time t and is given by $\Pr(X_t \mid Y_t)$. In this work, we model the emission distribution as a mixture of Gaussians with M components, i.e., $\Pr(X_t \mid Y_t = j) = \sum_{m=1}^{M} w_{jm}\, \mathcal{N}(X_t;\, \mu_{jm}, \Lambda_{jm}^{-1})$.
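As a concrete illustration of these two distributions, the following minimal numpy/scipy sketch performs one forward-filtering step of such an HMM with GMM emissions. This is our own example, not the paper's algorithm; the row convention T[i, j] = Pr(Y_t = j | Y_{t-1} = i) (transposed relative to the theta_ij above) and the nested parameter lists are assumptions for this sketch.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_emission(x, weights, means, covs):
    """Pr(X_t = x | Y_t = j) for every state j, one GMM per state."""
    return np.array([sum(w * multivariate_normal.pdf(x, mu, cov)
                         for w, mu, cov in zip(weights[j], means[j], covs[j]))
                     for j in range(len(weights))])

def forward_step(belief, x, T, weights, means, covs):
    """One filtering update: Pr(Y_t | x_{1:t}) from Pr(Y_{t-1} | x_{1:t-1}).
    T[i, j] = Pr(Y_t = j | Y_{t-1} = i)."""
    predicted = belief @ T                          # transition step
    updated = predicted * gmm_emission(x, weights, means, covs)
    return updated / updated.sum()                  # normalize
```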
In this paper, we will first estimate the parameters of the transition and emission distributions by Bayesian learning from a set of source domains (individuals). Subsequently, we will use these distributions as basis functions when estimating the transition and emission distributions of a target domain in which we wish to predict the hidden state for each observation. Parameter learning of an HMM by Bayesian learning is done by calculating the posterior over the parameters given a prior distribution. Writing Θ for the transition parameters and φ for the emission parameters, the update at each time step is

    Pr(Θ, φ, Y_t = j | X_{1:t}, Y_{t-1} = i) ∝ Pr(X_t | Y_t = j) · Pr(Y_t = j | Y_{t-1} = i) · Pr(Θ, φ, Y_{t-1} = i | X_{1:t-1}),

where the three factors are the emission distribution, the transition probability and the prior from time step t - 1, respectively.

Note that Variational Bayes (VB) and Markov Chain Monte Carlo (MCMC) techniques can also be used for approximate Bayesian learning as an alternative to BMM. However, MCMC is difficult to run in an online fashion. A recent comparison by Omar (2016) showed that BMM achieves better results than online Variational Bayes (oVB) (Sato, 2001) and Stochastic Variational Inference (SVI) (Wang et al., 2011) in the context of topic modeling. BMM was also shown to work better than other online techniques in several papers (Rashwan et al., 2016; Hsu & Poupart, 2016; Jaini et al., 2016). This is due to the fact that BMM is naturally online and therefore does not require mini-batches. In contrast, in order to run in an online fashion, Variational Bayes requires mini-batches and a decreasing learning rate; however, the size of the mini-batches and the decay procedure for the learning rate require some fine tuning. In general, the use of mini-batches always leads to some information loss since data in previous mini-batches is not accessible. BMM does not suffer from this type of information loss and there is no batch size nor learning rate to fine tune. Hence, we will adapt BMM to transfer learning in this work.

The Bayesian moment matching (BMM) algorithm for Gaussian mixture models was proposed by Jaini & Poupart (2016); Jaini et al. (2016). Exact Bayesian learning of mixture models based on streaming data is intractable because the number of terms in the posterior after observing each observation increases exponentially. BMM circumvents this issue by projecting the exact posterior P onto a tractable family of distributions P̃ by matching a set of sufficient moments. In this section, we give a brief overview of the BMM algorithm with an example. Suppose the data is generated by a mixture of Gaussians, Pr(X_t | Θ) = Σ_{j=1}^M w_j N(X_t; μ_j, Λ_j^{-1}), where Θ = {(w_1, μ_1, Λ_1), (w_2, μ_2, Λ_2), ..., (w_M, μ_M, Λ_M)} and M is known. Since a Normal-Wishart distribution is a conjugate prior for a Normal distribution with unknown mean and precision matrix, a natural prior over Θ is a Dirichlet over the weights times Normal-Wishart distributions NW(μ_i, Λ_i | δ_i, κ_i, W_i, ν_i) over the mean and precision matrix of each component. After observing the first data point X_1, the exact posterior is

    P_1(Θ) = (1/Z) Dir(w | α) ∏_{i=1}^M NW(μ_i, Λ_i | δ_i, κ_i, W_i, ν_i) Σ_{j=1}^M w_j N(X_1; μ_j, Λ_j^{-1}),

which can be rewritten as

    P_1(Θ | X_1) = Σ_{j=1}^M c_j Dir(w | α_j) NW(μ_j, Λ_j | δ_j*, κ_j*, W_j*, ν_j*) ∏_{i≠j} NW(μ_i, Λ_i | δ_i, κ_i, W_i, ν_i),

where α_j = (α_1, α_2, ..., α_j + 1, ..., α_M) and Z is the normalization constant. The equation above shows that the posterior is a mixture of products of distributions, where each product component in the summation has the same form as the family of distributions of the prior P_0(Θ). It is evident that the number of terms in the posterior grows by a factor of M at each iteration, which is problematic. The Bayesian moment matching algorithm approximates this mixture P_1(Θ) with a single product of Dirichlet and Normal-Wishart distributions P̃_1(Θ), belonging to the same family of distributions as the prior, by matching all the sufficient moments of P_1 with P̃_1:

    P̃_1(Θ) = Dir(w | α*) ∏_{i=1}^M NW(μ_i, Λ_i | δ_i*, κ_i*, W_i*, ν_i*).

The sufficient moments are

    E[w_i] = α_i / Σ_j α_j,     E[w_i²] = α_i(α_i + 1) / ((Σ_j α_j)(1 + Σ_j α_j)),
    E[Λ] = νW,                  Var(Λ_ij) = ν(W_ij² + W_ii W_jj),
    E[μ] = δ,                   E[(μ - δ)(μ - δ)^T] = ((κ + 1) / (κ(ν - d - 1))) W^{-1}.

Using this set of equations, the exact posterior P_1(Θ) can be approximated with P̃_1(Θ).
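To make the projection step concrete, here is a minimal sketch of the moment-matching update for the Dirichlet factor over the mixture weights (our own illustration with hypothetical names `dirichlet_from_moments` and `bmm_weight_update`; for clarity, the per-component likelihoods are treated as plain numbers, whereas the full algorithm also updates the Normal-Wishart factors from their own sufficient moments):

```python
import numpy as np

def dirichlet_from_moments(m1, m2):
    """Recover Dirichlet parameters alpha from E[w_i] = m1[i] and E[w_i^2] = m2[i]."""
    a0 = (m1[0] - m2[0]) / (m2[0] - m1[0] ** 2)  # total concentration; any coordinate gives the same value
    return m1 * a0

def bmm_weight_update(alpha, component_lik):
    """One BMM step for the weights w ~ Dir(alpha) of an M-component mixture,
    where component_lik[j] is the likelihood of the new point under component j."""
    c = (alpha / alpha.sum()) * component_lik    # mixture coefficients c_j of the exact posterior
    c = c / c.sum()
    m1 = np.zeros_like(alpha)                    # mixed first and second moments of the exact posterior
    m2 = np.zeros_like(alpha)
    for j, cj in enumerate(c):
        aj = alpha.copy()
        aj[j] += 1.0                             # component j sees its alpha_j incremented by one
        s = aj.sum()
        m1 += cj * aj / s
        m2 += cj * aj * (aj + 1.0) / (s * (s + 1.0))
    return dirichlet_from_moments(m1, m2)        # project back to a single Dirichlet
```

For example, `bmm_weight_update(np.ones(3), np.array([0.9, 0.05, 0.05]))` shifts the concentration towards the first component while keeping the posterior in the tractable family.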
This posterior will then be the prior for the next iteration, and we keep following the steps above iteratively to finally obtain a distribution P_n(Θ) after observing a stream of data X_{1:n}. The estimate is Θ̂ = E[P_n(Θ)]. The exact calculations for the Bayesian Moment Matching algorithm are given in Appendix A.

In this section, we first motivate the need for an online transfer learning algorithm for sequential data modeling and then explain in detail the different steps of the algorithm. The complete algorithm is given in Alg. (1)."}, {"section_index": "5", "section_name": "4.1 MOTIVATION", "section_text": "Several applications produce data instances from a population of individuals that exhibit a variety of different traits. For example, for the task of activity recognition, different individuals will have different gait patterns despite the fact that they are performing the same activity (e.g., walking, running, standing, etc.). Therefore, it is problematic to make predictions in such domains by considering the population to be homogeneous; however, every population will have individuals resembling each other in some characteristics. This suggests that we can use individuals in a population to make predictions about similar individuals by identifying those individuals who closely resemble each other. However, identifying individuals with similar traits is not straightforward. Alternatively, weights can be assigned to each individual in a population based on a target individual (the individual about whom predictions are to be made). All those individuals who closely resemble the target individual will receive higher weights than those with dissimilar traits. Subsequently, predictions about the behavior of the target individual will be based mostly on the observed behavior of the similar individuals.

Our transfer learning algorithm addresses precisely these issues. It has three main steps. First, it learns a model (transition and emission distributions) for each source domain (or individual in a population) that best explains the observations of that source domain. Next, given a target domain (or target individual), it identifies those individuals that closely resemble the target individual by estimating a basis weight associated to each source domain. A higher weight for a source domain implies that the corresponding individual more closely resembles the target individual. Finally, it predicts the hidden states for each observation in the target domain by using the models learned in the source domains and the basis weights that are given to each transition and emission distribution of the source domains. We now explain each step of the algorithm in detail below."}, {"section_index": "6", "section_name": "4.2 SOURCE DOMAIN - TRAINING", "section_text": "The first step is to learn a model for each source domain in the training data. Suppose that we have labeled sequence data for K different source domains. Let

    Y_t^k = the hidden state label at time step t for source domain k,
    X_t^k = the feature vector at time step t for source domain k.

We denote the transition matrix for the k-th source domain with Θ^k. Let the emission distribution be modeled by a mixture of Gaussians with M components. This implies

    Pr(X_t^k | Y_t^k = j) = Σ_{m=1}^M w_{jm}^k N(X_t^k; μ_{jm}^k, (Λ_{jm}^k)^{-1})   ∀ j ∈ {1, 2, ..., N}.

Our aim is to learn the parameters characterizing the transition and the emission distribution for each source domain; that is, we want to learn the parameters Θ^k of the transition distribution and φ^k of the emission distribution for each source domain k ∈ {1, 2, ..., K}. The priors are defined as follows:

- Transition distribution: Each column of the N × N transition matrix specifies the probability of making a transition from the state given by that column index to the state given by the row index. We define a Dirichlet distribution as a prior over each column of the transition matrix. Hence, ∏_{i=1}^N Dir(θ_i^k | α_i^k) is the prior over Θ^k.
- Emission distribution: Dir(w_j^k; β_j^k) ∏_{u=1}^M NW(μ_{ju}^k, Λ_{ju}^k; δ_{ju}^k, κ_{ju}^k, W_{ju}^k, ν_{ju}^k) defines a prior over a mixture of Gaussians for hidden state j with M components, where the Dirichlet distribution is the prior over the mixture weights and the Normal-Wishart distribution is the prior over the mean and precision matrix of each mixture component. We take a product over j to obtain a prior over all emission distributions.

Together,

    Pr(Θ^k, φ^k) = ∏_{i=1}^N Dir(θ_i^k | α_i^k) ∏_{j=1}^N [ Dir(w_j^k; β_j^k) ∏_{u=1}^M NW(μ_{ju}^k, Λ_{ju}^k; δ_{ju}^k, κ_{ju}^k, W_{ju}^k, ν_{ju}^k) ].

Since we use a hidden Markov model, the update equation at each time step for a source domain k is

    Pr(Θ^k, φ^k, Y_t^k = j | X_t^k, Y_{t-1}^k = i) ∝ Pr(X_t^k | Y_t^k = j) · Pr(Y_t^k = j | Y_{t-1}^k = i) · Pr(Θ^k, φ^k, Y_{t-1}^k = i | X_{1:t-1}^k),   (3)

where the three factors are again the emission distribution, the transition probability and the prior from time step t - 1. The posterior distribution (Eq. (3)) after each observation is a mixture of products of distributions, where each component has the same form as the prior distribution, since Pr(X_t^k | Y_t^k = j) is a mixture of Gaussians. Therefore, the number of terms in the posterior increases exponentially if we perform exact Bayesian learning. To circumvent this, we use BMM for Gaussian mixture models as described in Jaini et al. (2016); Jaini & Poupart (2016). The complete calculations for learning in the source domain are given in Appendix B.
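Structurally, one labeled update in the source domain then looks as follows (a sketch under our own naming; `bmm_gmm_update` stands for an emission projection of the kind sketched in Section 3). Because the state labels are observed in the source domains, the transition posterior remains an exact product of Dirichlets, and only the emission mixtures require the moment-matching approximation:

```python
def source_update(alpha, emis_params, x_t, y_t, y_prev, bmm_gmm_update):
    """One Bayesian update for source domain k on a labeled pair (x_t, y_t).
    alpha[i, j] are Dirichlet counts over transitions j -> i; emis_params[j] holds the
    Dirichlet/Normal-Wishart hyperparameters of state j's Gaussian mixture."""
    alpha[y_t, y_prev] += 1.0  # observed transition: exact conjugate Dirichlet update
    # only the mixture-component assignment is latent, so state y_t's emission
    # hyperparameters are refreshed by the Bayesian moment matching projection
    emis_params[y_t] = bmm_gmm_update(emis_params[y_t], x_t)
    return alpha, emis_params
```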
The main computation in the learning and updating routine is the calculation of the sufficient set of moments using the Bayesian posterior given in Eq. (9) in Appendix B. Let M be the number of components in the mixture model for emission distributions, N the number of hidden states and d the number of features in the data. The computational complexity for updating the parameters in the source domain learning step for each iteration is O(M²N²) for each scalar parameter, and O(M²N²d³) for the parameters of the distribution over the precision matrix, because that involves a matrix multiplication step."}, {"section_index": "7", "section_name": "4.3 TARGET DOMAIN - PREDICTION", "section_text": "The goal is to predict the hidden states for a target individual (or domain) as we observe a sequence of observations. In the previous step, we learned the transition and emission distributions individually for K different sources. These sources can be thought of as individuals in a population. The transition and emission distributions learned from the individual sources form a basis for the transition and emission distributions of the target domain. Specifically, let the transition distribution for the k-th source be denoted by g(Θ^k) and the emission distribution be denoted by f(φ_j^k) for a given hidden state j. Then, the transition and emission distributions for the target domain are the weighted combinations

    Pr(Y_t = j | Y_{t-1} = i) = Σ_{m=1}^K λ_m Pr(Y_t^m = j | Y_{t-1}^m = i) = Σ_{m=1}^K λ_m g(Θ_{ji}^m),
    Pr(X_t | Y_t = j) = Σ_{k=1}^K π_k Pr(X_t^k | Y_t^k = j) = Σ_{k=1}^K π_k f(φ_j^k).

We first need to compute the basis weights λ = (λ_1, λ_2, ..., λ_K) and π = (π_1, π_2, ..., π_K). We estimate (λ, π) in an unsupervised manner using BMM. We define a Dirichlet prior over λ and π, i.e., Pr(λ, π) = Dir(λ; γ) Dir(π; ν). The posterior after observing a new data point is

    Pr(λ, π, Y_t = j | X_t) ∝ Pr(X_t | Y_t = j) Σ_{i=1}^N Pr(Y_t = j | Y_{t-1} = i) Pr(λ, π, Y_{t-1} = i)
                            ∝ [ Σ_{k=1}^K π_k f(φ_j^k) ] [ Σ_{i=1}^N Σ_{m=1}^K λ_m g(Θ_{ji}^m) ] Dir(λ; γ) Dir(π; ν)
                            ∝ Σ_{k,m} Σ_{i=1}^N C(i, j, k, m) Dir(π; ν̂) Dir(λ; γ̂),   (8)

where f(φ_j^k) and g(Θ_{ji}^m) are known from the source domains, π_k Dir(π; ν) = a_k Dir(π; ν̂), λ_m Dir(λ; γ) = b_m Dir(λ; γ̂) and C(i, j, k, m) = a_k b_m f(φ_j^k) g(Θ_{ji}^m). The exact calculations are given in Appendix C. We approximate the posterior in Eq. (8) by projecting it onto a tractable family of distributions with the same set of sufficient moments as the posterior, using the Bayesian Moment Matching approach. Finally, the estimate of (λ, π) is the expected value of the final posterior. This completes the learning stage.

The transition and emission distributions for the target domain are thus weighted combinations of the transition and emission distributions learned in the source domains. The advantage of this linear combination is to account for heterogeneity in the data. The learning step in the target domain ensures that only those source domains that closely resemble the target domain are given higher weights. This helps to bias the predictions according to the closest basis models when the population is not homogeneous.

Predictions can be made in two different manners (a sketch of the online mode follows the list):

- Online: initialize the prior over λ and π to be uniform. As each new data point is observed in a sequence, a prediction is made based on the mean of the current posterior over λ and π, and subsequently the posterior is updated based on Eq. (8).
- Offline: compute the posterior of λ and π based on Eq. (8) by using the entire sequence of observations of the target individual. Once the posterior is computed, predict the hidden states for each observation in the sequence based on the mean estimates of the posterior.
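The online prediction mode can be sketched as follows (our own illustration; the containers and names are assumptions, not the authors' code). The hidden state is predicted from the mean basis weights, after which the hyperparameters (γ, ν) would be refreshed by the moment-matching projection of Eq. (8):

```python
import numpy as np

def target_predict(x_t, prev_belief, gamma, nu, basis_trans, basis_emis):
    """prev_belief[i] = Pr(Y_{t-1} = i | X_{1:t-1}); gamma, nu are the Dirichlet
    hyperparameters over lambda and pi; basis_trans[m][i, j] = Pr(Y_t = i | Y_{t-1} = j)
    for source m; basis_emis[k](x, j) returns Pr(x | Y_t = j) under source k's mixture."""
    lam = gamma / gamma.sum()                    # E[lambda] under Dir(gamma)
    pi = nu / nu.sum()                           # E[pi] under Dir(nu)
    K, N = len(basis_trans), prev_belief.shape[0]
    trans = sum(lam[m] * basis_trans[m] for m in range(K))   # combined transition matrix
    emis = np.array([sum(pi[k] * basis_emis[k](x_t, j) for k in range(K))
                     for j in range(N)])                     # combined emission likelihoods
    belief = emis * (trans @ prev_belief)
    belief = belief / belief.sum()
    return int(np.argmax(belief)), belief        # predicted hidden state and updated belief
```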
In Fig. 1 we show the schematic of the proposed online transfer learning algorithm. The figure shows the learning phase for each source domain, where the emission and transition distributions are learned using the Bayesian Moment Matching technique. After learning in the source domains, we learn the weights of the basis models in the target domain for each new observation and make predictions in an online manner.

[Figure 1 omitted. The schematic shows, for each source domain k, a learning phase in which the Bayesian update P_t(Θ^k, φ^k, Y_t = j | Y_{t-1} = i, X_t) yields a mixture of products of Dirichlets and Normal-Wisharts that is projected back to a single product by moment matching, producing the basis model Θ̂^k = E[Θ^k], φ̂^k = E[φ^k]. In the target domain, the weights (λ_k, π_k) are updated for each new observation X_t by the same projection, and the hidden state is predicted as Y_t = argmax_j P(λ, π, Y_t = j | X_t).]

Figure 1: Transfer Learning architecture.

Algorithm (1) gives the complete algorithm for transfer learning by Bayesian Moment Matching.

Algorithm 1: Online Transfer Learning by Bayesian Moment Matching

- Update step: In this step, the hyper-parameters (γ, ν) over the weights (λ, π) are updated. The main computation is the calculation of the set of sufficient moments from the updated Bayesian posterior given in Eq. (8). Hence, the computational complexity of the update step in the target domain for each observation is O(K²N²), where K is the number of source domains and N is the number of hidden states.
- Prediction step: In the prediction step, a hidden label is assigned to the observation based on the model obtained from the update step. The main computation is the calculation of the likelihood of each hidden state for the observation. The computational complexity of the prediction step is hence O(MKN), where M is the number of components in the mixture model, K is the total number of source domains and N is the number of hidden states.

This section describes experiments on three tasks from different domains: activity recognition, sleep cycle prediction among healthy individuals and patients suffering from Parkinson's disease, and packet flow prediction in telecommunication networks.

The baseline algorithm uses Bayesian Moment Matching to learn the parameters of the HMM. Concretely, we have data collected from several individuals (or sources) in a population for each task. For transfer learning, we train an HMM with mixture-of-Gaussian emission distributions for each source (or individual) except the target individual.
For the target individual, we estimate a posterior over the basis weights in an online and unsupervised fashion and make online predictions about the hidden states. We compare the performance of our transfer learning algorithm against the EM and baseline algorithms that treat the population as homogeneous, i.e., we train an HMM by combining the data from all the sources except the target individual. Then, using this model, we make online predictions about the hidden states of the target individual.

We report the results based on leave-one-out cross validation, where the data of a different individual is left out in each round. For each task, we treat every individual as a target individual once. For a fair comparison, the HMM model learned for both the baseline algorithm and the EM algorithm has the same number of components as the HMM model learned by the online transfer learning algorithm.

For each task, we compare our online transfer learning algorithm to EM (trained by maximum likelihood) and a baseline algorithm (that uses Bayesian moment matching), which both learn a single HMM with mixtures of Gaussians as emissions by treating the population as homogeneous. Furthermore, we conduct experiments using recurrent neural networks (RNNs) due to their popularity in sequence learning.

Regarding RNNs, we used architectures with as many input nodes as the number of attributes, one hidden layer consisting of long short-term memory (LSTM) units (Hochreiter & Schmidhuber, 1997) and one softmax output layer with as many nodes as the number of classes. We use the categorical cross-entropy loss as the cost function. We select LSTM units instead of sigmoid or hyperbolic tangent units due to their popularity and success in sequence learning (Sutskever et al., 2014).

We experimented with various architectures before we settled on the aforementioned values; in particular, architectures with a single hidden layer consistently performed better than multiple layers, possibly because our datasets are not very complex. We train the network by backpropagation through time (BPTT) truncated to 20 time steps (Williams & Peng, 1990). The RNNs are trained for a maximum of 150 epochs, or until convergence is reached. Our implementation is based on the Theano library (Theano Development Team, 2016) in Python.

We perform grid search to select the best hyper-parameters for each setting. For the training method, we either use Nesterov's accelerated gradient descent (Nesterov, 1983; Sutskever et al., 2013) with learning rates [0.001, 0.01, 0.1, 0.2] and momentum values [0, 0.2, 0.4, 0.6, 0.8, 0.9], or rmsprop (Tieleman & Hinton, 2012) with ε = 10^{-4} and decay factor 0.9 (standard values), learning rates [0.00005, 0.0001, 0.0002, 0.001] and momentum values [0, 0.2, 0.4, 0.6, 0.8, 0.9]. The weight decay takes values from [0.001, 0.01, 0.1], whereas the number of LSTM units in the hidden layer takes the possible values [2, 4, 6, 9, 12].
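Concretely, the hyper-parameter grid above amounts to the following sweep (a sketch; the variable names are illustrative):

```python
from itertools import product

def configs(grid):
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

nesterov = {"method": ["nesterov"], "lr": [0.001, 0.01, 0.1, 0.2],
            "momentum": [0, 0.2, 0.4, 0.6, 0.8, 0.9]}
rmsprop = {"method": ["rmsprop"], "lr": [0.00005, 0.0001, 0.0002, 0.001],
           "momentum": [0, 0.2, 0.4, 0.6, 0.8, 0.9]}
shared = {"weight_decay": [0.001, 0.01, 0.1], "n_lstm_units": [2, 4, 6, 9, 12]}

all_configs = [dict(opt, **extra)
               for grid in (nesterov, rmsprop)
               for opt in configs(grid)
               for extra in configs(shared)]  # 2 x 4 x 6 x 3 x 5 = 720 candidate settings
```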
For each task, we run experiments 10 times with each individual taken as target and the rest acting as source domains for training. We report the average percentage accuracy, use the Wilcoxon signed rank test (Wilcoxon, 1950) to compute a p-value, and report statistical significance when the p-value is less than 0.05. In the following sections, we discuss the results for each task in detail."}, {"section_index": "8", "section_name": "ACTIVITY RECOGNITION", "section_text": "As part of an on-going study to promote physical activity, we collected smartphone data with 19 participants and tested our transfer learning algorithm to recognize 5 different kinds of activities: sitting, standing, walking, running and in-a-moving-vehicle. While APIs already exist to automatically recognize walking, running and in-a-moving-vehicle on Android and Apple smartphones, sitting and standing are not available in the standard APIs. Furthermore, our long-term goal is to obtain robust recognition algorithms for older adults and individuals with perturbed gait (e.g., due to a stroke). Labeled data was obtained by instructing the 19 participants to walk at varying speeds for 4 min, run for 2 min, stand for 2 min, sit for 2 min and ride a moving vehicle to a destination of their choice. The data collected was segmented in epochs of 1 second, and 48 features (means and standard deviations of the 3D accelerometry in each epoch) were computed by the smartphone.

The online transfer learning algorithm learned an HMM over 18 individuals, which acted as basis models for prediction on the 19th individual. In this manner, we ran experiments for each individual 10 times to get a statistical measure of the results.

Table 1: Average percentage accuracy of prediction for activity recognition on 19 different individuals. The best results among the Baseline, the EM algorithm, RNN and the Transfer Learning algorithm are highlighted in bold font. ↑ (or ↓) indicates that Transfer Learning has significantly better (or worse) accuracy than the best algorithm among the baseline, EM and RNN under the Wilcoxon signed rank test with p-value < 0.05.

    TARGET DOMAIN   BASELINE   EM      RNN     TRANSFER LEARNING
    PERSON 1        91.29      83.57   71.15   88.36
    PERSON 2        81.37      79.87   79.58   87.65
    PERSON 3        74.68      75.91   69.56   93.15
    PERSON 4        73.39      68.29   74.25   84.70
    PERSON 5        95.94      89.59   95.36   99.75
    PERSON 6        73.98      69.77   61.71   96.43
    PERSON 7        57.62      55.15   69.22   70.75
    PERSON 8        91.72      86.05   74.49   97.80
    PERSON 9        81.19      78.88   78.72   88.75
    PERSON 10       99.12      93.60   92.00   97.35
    PERSON 11       76.59      74.67   84.75   88.75
    PERSON 12       55.36      59.71   53.63   95.05
    PERSON 13       79.66      73.46   65.54   97.60
    PERSON 14       92.06      89.11   63.59   93.12
    PERSON 15       79.25      72.24   91.08   94.20
    PERSON 16       84.08      79.23   74.74   83.51
    PERSON 17       93.95      91.03   81.25   97.60
    PERSON 18       82.84      74.88   79.45   87.20
    PERSON 19       95.97      89.06   95.88   95.06

Table (1) compares the average percentage accuracy of prediction for activity recognition with 19 different individuals. It demonstrates that the transfer learning algorithm performed better than the baseline on 15 individuals, and in the other cases its accuracy was close to the baseline. Furthermore, it is also worth noting that in most cases the confusion in the algorithm's predictions was between the following pairs of classes: In-a-Moving-Vehicle vs. Standing and In-a-Moving-Vehicle vs. Sitting. This is expected because in most cases the person was either standing/sitting in a bus or sitting in a car. Table (1) also demonstrates the superior performance of the online transfer learning algorithm as compared to the EM algorithm. Finally, note the poor performance of RNNs despite the fact that we fine-tuned the architecture to get the best results. RNNs are in theory very expressive.
However, they are also notoriously difficult to train and fine-tune due to their non-convexity and the vanishing/exploding gradient issues that arise in backpropagation through time. Indeed, in several cases they even underperform all other methods."}, {"section_index": "9", "section_name": "SLEEP STAGE CLASSIFICATION", "section_text": "Sleep disruption can lead to various health issues. Understanding and analyzing sleep patterns, therefore, has great potential to significantly improve the quality of life for both patients and healthy individuals. In both clinical and research settings, the standard tool for quantifying sleep architecture and physiology is polysomnography (PSG), which is the measurement of electroencephalography (EEG), electrooculography (EOG), electromyography (EMG), electrocardiography (ECG), and respiratory function of an individual during sleep. The analysis of sleep architecture is of relevance for the diagnosis of several neurological disorders, e.g., Parkinson's disease (Peeraully et al., 2012), because neurological anomalies often also reflect in variations of a patient's sleep patterns.

Typically, PSG data is divided into 30-second epochs and classified into 5 stages of sleep: wake (W), rapid eye movement sleep (REM) or one of 3 non-REM sleep stages (N1, N2, and N3), based on the visual identification of specific signal features on the EEG, EOG, and EMG channels. Epochs that cannot be distinctly sorted into one of the 5 stages are labeled as Unknown. While it is a valuable clinical and research tool, visual classification of EEG data remains time consuming, requiring up to 2 hours for a highly trained technologist to classify all the epochs within a typical 7-hour PSG recording. Beyond that, inter-scorer agreement rates remain low, around 80% (Rosenberg & Van Hout, 2013). High annotation costs and low inter-scorer agreement rates have motivated efforts to develop fully automated approaches for sleep stage classification (Anderer et al., 2005; Jensen et al., 2010; Malhotra et al., 2013; Punjabi et al., 2015). However, many of these methods result in generic cross-patient classifiers that fail to reach levels of accuracy and reliability high enough to be adopted in real-world medical settings.

The polysomnograms (PSGs) we used for our evaluation were obtained at a clinical neurophysiology laboratory in Toronto (name anonymized) according to the American Academy of Sleep Medicine guidelines, using a Grael HD PSG amplifier (Compumedics, Victoria, Australia). We selected recordings from 142 patients obtained between 2009 and 2015. Out of these 142 recordings, 91 were from healthy subjects and 51 were from patients with Parkinson's disease.

Each recording was manually scored by a single registered PSG technologist. Recordings were first segmented into fixed-size windows of 30-second epochs. To reduce complexity and processing time in the feature extraction and manual labeling step, we only retained EEG channel C4-A1, which is deemed especially important for sleep stage classification (Sil, 2007). Channel selection and segmentation resulted in a ground truth data set where each instance was represented by a single-channel time series of 7680 floating point numbers corresponding to 30 seconds of C4-A1, sampled at 256 Hz. A vector of 26 scalar features was extracted from each epoch. Bao et al. (2011) and
Motamedi-Fakhr et al. (2014) give a detailed listing and explanation of all 26 features.

The online transfer learning algorithm learned an HMM over 50 individuals chosen at random, which acted as basis models for prediction on the target individual. We did not use all 140 individuals for the basis models because doing so resulted in sources getting sparse weights, diluting the effect of heterogeneity. We completed the experiments for each individual 10 times in this manner to get a statistical measure of the results.

Fig. (2) shows scatter plots of accuracy for our online transfer learning technique against the three baseline algorithms - BMM, EM (maximum likelihood) and RNNs - which treat the data as homogeneous, for the sleep stage classification dataset. For each plot, a point above the dotted line indicates higher accuracy of the online transfer learning technique as compared to the corresponding baseline algorithm for the target patient. The plots show consistently superior performance of our online transfer learning technique as compared to both baseline algorithms, BMM and EM, for all target patients. The online transfer learning technique also performs better on a majority of patients (102 out of 142) as compared to an optimized RNN.

[Figure 2 omitted: three accuracy scatter plots, one per baseline; panels: BMM, EM (max. likelihood), RNN.]

Figure 2: Performance comparison of the online transfer learning algorithm with three different baseline algorithms - BMM, EM (max. likelihood) and RNNs - on sleep stage classification data using scatter plots of accuracy.

All the results are statistically significant under the Wilcoxon signed rank test with p-value < 0.05. More detailed results for the comparison of the online transfer learning technique with the three baseline algorithms are given in Appendix (D)."}, {"section_index": "10", "section_name": "FLOW DIRECTION PREDICTION", "section_text": "Accurate prediction of future traffic plays an important role in proactive network control. Proactive network control means that if we know the future traffic (including directions and traffic volume), then we have more time to find a better policy for network routing, priority scheduling and rate control, in order to maximize network throughput while minimizing transmission delay, packet loss rate, etc.

Better understanding the behavior of TCP connections in certain applications can provide important input to automatic application type detection, especially in those scenarios where network traffic is encrypted and DPI (Deep Packet Inspection) is nearly impossible. Different applications can be distinguished by the distinct behavior of their TCP connections, which are well described by the corresponding HMMs.

We performed our experiments with a publicly available dataset of real traffic from academic buildings. The dataset consists of packet traces with TCP flows. For our experiments, we only consider three packet sizes and the flow size as the features. The hidden labels are the source of generation of the packet, i.e., Server or Client. We divided the dataset into 9 domains, with each domain consisting of a number of observation sequences. For the online transfer learning algorithm, we learned an HMM
for each of 8 sources that acted as basis models for prediction on the 9th source. We compared the performance of the online transfer learning algorithm with EM and the baseline algorithm, which treat the data as homogeneous. Table 2 reports the average (over 10 experimental runs) percentage accuracy for each source. The online transfer learning algorithm performs better than both the baseline and the EM algorithm. The results are statistically significant under the Wilcoxon signed rank test with p-value < 0.05. Furthermore, we compare our method to RNNs. It turns out that for the task of traffic direction prediction, RNNs can actually perform well, unlike on, for instance, the activity recognition dataset. The better performance this time may be due to the simpler structure of the data, which consists of a single attribute and a binary class. This is in sharp contrast to the activity recognition dataset, whose instances contain 48 attributes and can belong to 5 classes, and which is thus harder to train on.

Table 2: Average percentage accuracy of prediction for flow direction prediction for 9 different domains. The best results among the Baseline, the EM algorithm, RNN and the Transfer Learning algorithm are highlighted in bold font. ↑ (or ↓) indicates that transfer learning has significantly better (or worse) accuracy than the best technique among the baseline algorithm, EM and RNN under the Wilcoxon signed rank test with p-value < 0.05.

    TARGET DOMAIN   BASELINE   EM      RNN     TRANSFER LEARNING
    SOURCE 1        72.00      54.90   80.00   71.02
    SOURCE 2        85.33      89.10   65.30   86.50
    SOURCE 3        80.33      81.90   86.50   83.33
    SOURCE 4        86.50      75.80   86.60   87.17
    SOURCE 5        87.33      82.80   81.70   86.00
    SOURCE 6        93.33      78.20   88.90   93.50
    SOURCE 7        95.17      90.70   93.50   95.33
    SOURCE 8        89.83      91.14   91.00   91.63
    SOURCE 9        76.67      75.68   81.98   78.83

In many applications, data is produced by a population of individuals that exhibit a certain degree of variability. Traditionally, machine learning techniques ignore this variability and train a single model under the assumption that the population is homogeneous. While several offline transfer learning techniques have already been proposed to account for population heterogeneity, this work describes the first online transfer learning technique (to our knowledge) that incrementally determines which source models best explain a streaming sequence of observations while predicting the corresponding hidden states. We achieved this by adapting the online Bayesian moment matching algorithm, originally developed for mixture models, to hidden Markov models. Experimental results confirm the benefits of this approach on three real-world tasks.

In the future, this work could be extended in several directions. Since it is not always clear how many basis models should be used, nor that the observation sequences of target individuals can necessarily be explained by a weighted combination of basis models, it would be interesting to explore techniques that can automatically determine a good number of basis models and that can generate new basis models on the fly when existing ones are insufficient. Furthermore, since recurrent neural networks (RNNs) have been shown to outperform HMMs with GMM emission distributions in some applications such as speech recognition (Graves et al., 2013), it would be interesting to generalize our online transfer learning technique to RNNs."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was funded by grants from the Network for Aging Research at the University of Waterloo,
the PROPEL Centre for Population Health Impact at the University of Waterloo, Huawei Noah's Ark Laboratory in Hong Kong, CIHR (CPG-140200) and NSERC (CHRP 478468-15)."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Performance of an automated polysomnography scoring system versus computer-assisted manual scoring. Sleep, 36(4):573-582, April 2013. ISSN 1550-9109. doi: 10.5665/sleep.2548.

Rita Chattopadhyay, Narayanan Chatapuram Krishnan, and Sethuraman Panchanathan. Topology preserving domain adaptation for addressing subject based variability in SEMG signal. In AAAI Spring Symposium: Computational Physiology, pp. 4-9, 2011.

Hai Leong Chieu, Wee Sun Lee, and Leslie P Kaelbling. Activity recognition from physiological data using conditional random fields. 2006.

Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 6645-6649. IEEE, 2013.

Peter Anderer, Georg Gruber, Silvia Parapatics, Michael Woertz, Tatiana Miazhynskaia, Gerhard Klosch, Bernd Saletu, Josef Zeitlhofer, Manuel J Barbanoj, Heidi Danker-Hopfe, Sari-Leena Himanen, Bob Kemp, Thomas Penzel, Michael Grozinger, Dieter Kunz, Peter Rappelsberger, Alois Schlogl, and Georg Dorffner. An e-health solution for automatic sleep classification according to Rechtschaffen and Kales: Validation study of the Somnolyzer 24x7 utilizing the Siesta database. Neuropsychobiology, 51(3):115-133, 2005. ISSN 0302-282X. doi: 10.1159/000085205.

Diane Cook, Kyle D Feuz, and Narayanan C Krishnan. Transfer learning for activity recognition: A survey. Knowledge and Information Systems, 36(3):537-556, 2013.

Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. Boosting for transfer learning. In Proceedings of the 24th International Conference on Machine Learning, pp. 193-200. ACM, 2007.

Priyank Jaini and Pascal Poupart. Online and distributed learning of gaussian mixture models by bayesian moment matching. arXiv preprint arXiv:1609.05881, 2016.

Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27:372-376, 1983.

Farheen Omar. Online bayesian learning in probabilistic graphical models using moment matching with applications. 2016.

Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2010.

Naresh M Punjabi, Naima Shifa, Georg Dorffner, Susheel Patil, Grace Pien, and Rashmi N Aurora. Computer-assisted automated scoring of polysomnograms using the Somnolyzer system. Sleep, 38(10):1555-1566, 2015. ISSN 1550-9109. doi: 10.5665/sleep.5046.

Parisa Rashidi and Diane J Cook. Transferring learned activities in smart environments. In Intelligent Environments, pp. 185-192, 2009.

Abdullah Rashwan, Han Zhao, and Pascal Poupart. Online and distributed bayesian moment matching for sum-product networks. In International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 1727-1735, 2016.

Masa-Aki Sato. Online model selection based on the variational bayes. Neural Computation, 13(7):1649-1681, 2001.

Ling Shao, Fan Zhu, and Xuelong Li. Transfer learning for visual categorization: A survey. IEEE Transactions on Neural Networks and Learning Systems, 26(5):1019-1034, 2015.

Wei-Shou Hsu and Pascal Poupart. Online bayesian moment matching for topic modeling with unknown number of topics. In Advances in Neural Information Processing Systems, 2016.
Shayan Motamedi-Fakhr, Mohamed Moshrefi-Torbati, Martyn Hill, Catherine M Hill, and Paul R White. Signal processing techniques applied to human sleep EEG signals - a review. Biomedical Signal Processing and Control, 10:21-33, March 2014. ISSN 1746-8094. doi: 10.1016/j.bspc.2013.12.003.

Tasneem Peeraully, Ming-Hui Yong, Sudhansu Chokroverty, and Eng-King Tan. Sleep and Parkinson's disease: A review of case-control polysomnography studies. Movement Disorders, 27(14):1729-1737, December 2012. ISSN 0885-3185. doi: 10.1002/mds.25197.

Matthew E Taylor and Peter Stone. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research, 10(Jul):1633-1685, 2009.

Chong Wang, John William Paisley, and David M Blei. Online variational inference for the hierarchical dirichlet process. In AISTATS, volume 2, pp. 4, 2011.

Frank Wilcoxon. Some rapid approximate statistical procedures. Annals of the New York Academy of Sciences, pp. 808-814, 1950.

Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, 2016.

R.J. Williams and J. Peng. An efficient gradient-based algorithm for online training of recurrent network trajectories. Neural Computation, 2(4):490-501, 1990."}, {"section_index": "13", "section_name": "NORMAL-WISHART AND DIRICHLET DISTRIBUTION", "section_text": "The Dirichlet distribution is a family of multivariate continuous probability distributions over the interval [0,1]. It is the conjugate prior probability distribution for the multinomial distribution. We next show how the combination used above happens for a Dirichlet. Multiplying a Dirichlet by one of its variables yields a scaled Dirichlet with one incremented parameter:

    w_m Dir(w; α) = w_m · (Γ(Σ_i α_i) / ∏_i Γ(α_i)) ∏_i w_i^{α_i - 1}
                  = (α_m / Σ_i α_i) · (Γ(Σ_i α_i + 1) / (Γ(α_m + 1) ∏_{i≠m} Γ(α_i))) · w_m^{α_m} ∏_{i≠m} w_i^{α_i - 1}
                  = (α_m / Σ_i α_i) Dir(w; α̂),   where α̂_i = α_i + 1 if i = m, and α̂_i = α_i if i ≠ m.

The Normal-Wishart distribution is a conjugate prior of a multivariate Gaussian distribution with unknown mean and precision matrix (Degroot, 1970). It is the combination of a Wishart distribution over the precision matrix and a Gaussian distribution over the mean given the precision matrix. Let μ be a d-dimensional vector and Λ a symmetric positive definite d × d matrix of random variables. A Normal-Wishart distribution over (μ, Λ) with parameters (δ, κ, W, ν) is such that μ ~ N_d(μ; δ, (κΛ)^{-1}), where κ > 0 is real and δ ∈ R^d, and Λ has a Wishart distribution Λ ~ W(Λ; W, ν), where W ∈ R^{d×d} is a positive definite matrix and ν > d - 1 is real. The marginal distribution of μ is a multivariate t-distribution, μ ~ t_{ν-d+1}(μ; δ, W^{-1}/(κ(ν - d + 1))).

A Normal-Wishart distribution multiplies with a Gaussian with the same mean and precision matrix to give a new Normal-Wishart distribution:

    N_d(y; μ, Λ^{-1}) · NW(μ, Λ | δ, κ, W, ν) ∝ NW(μ, Λ | δ*, κ*, W*, ν*),

where

    δ* = (κδ + y) / (κ + 1),   κ* = κ + 1,   ν* = ν + 1,   W* = W + (κ / (κ + 1)) (δ - y)(δ - y)^T.
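As a small executable sketch of the one-observation product rule above (our own code, mirroring the printed update equations):

```python
import numpy as np

def nw_times_gaussian(delta, kappa, W, nu, y):
    """Absorb one observation y into NW(mu, Lambda | delta, kappa, W, nu),
    returning the updated hyperparameters (delta*, kappa*, W*, nu*)."""
    delta_star = (kappa * delta + y) / (kappa + 1.0)
    W_star = W + (kappa / (kappa + 1.0)) * np.outer(delta - y, delta - y)
    return delta_star, kappa + 1.0, W_star, nu + 1.0
```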
We next show the system of equations with which the parameters of a product of Dirichlet and Normal-Wishart distributions can be estimated once the set of sufficient moments is known. The set of sufficient moments in this case is S = {w_j, w_j², μ_j, Λ_j, (Λ_j²)_{(k,m)} | ∀ j ∈ {1, 2, ..., M}}, where (Λ_j)_{(k,m)} is the (k, m)-th element of the matrix Λ_j. The expressions for the sufficient moments are:

    E[w_i] = α_i / Σ_j α_j,     E[w_i²] = α_i(α_i + 1) / ((Σ_j α_j)(1 + Σ_j α_j)),
    E[Λ] = νW,                  Var(Λ_ij) = ν(W_ij² + W_ii W_jj),
    E[μ] = δ,                   E[(μ - δ)(μ - δ)^T] = ((κ + 1) / (κ(ν - d - 1))) W^{-1}.

The parameters of the approximate posterior P̃ can be computed by inverting the equations above:

    α_0 = (E[w_i] - E[w_i²]) / (E[w_i²] - E[w_i]²),   α_i = E[w_i] · α_0,
    δ = E[μ],   ν = 2 E[Λ_ii]² / Var(Λ_ii),   W = E[Λ] / ν,
    κ = 1 / ((ν - d - 1) · (E[(μ - δ)(μ - δ)^T] W)_{11} - 1).

For the source domains, the update equation at each time step for a source domain k is

    Pr(Θ^k, φ^k, Y_t^k = j | X_t^k, Y_{t-1}^k = i) ∝ Pr(X_t^k | Y_t^k = j) · Pr(Y_t^k = j | Y_{t-1}^k = i) · Pr(Θ^k, φ^k, Y_{t-1}^k = i | X_{1:t-1}^k),

where the three factors are the emission distribution, the transition probability and the prior for time step t - 1. The posterior after inserting all the relevant terms can be written as

    Pr(Θ^k, φ^k, Y_t^k = j | X_t^k, Y_{t-1}^k = i) = Σ_{m=1}^M C(i, j, k, m) · Dir(θ_i^k | α̂_i^k) ∏_{i'≠i} Dir(θ_{i'}^k | α_{i'}^k) · Dir(w_j^k; β̂_{jm}^k) ∏_{j'≠j} Dir(w_{j'}^k; β_{j'}^k) · NW(μ_{jm}^k, Λ_{jm}^k; δ*_{jm}, κ*_{jm}, W*_{jm}, ν*_{jm}) ∏_{u≠m} NW(μ_{ju}^k, Λ_{ju}^k; δ_{ju}, κ_{ju}, W_{ju}, ν_{ju}),   (9)

where the hatted and starred hyperparameters are obtained by the combination rules above, ∀ m ∈ {1, 2, ..., M}. The exact moments can then be calculated by

    E[z] = ∫∫ z · Pr(Θ^k, φ^k, Y_t^k = j | X_t^k, Y_{t-1}^k = i) d(Θ^k) d(φ^k),   ∀ z ∈ S.

Once we know the moments, we can use them to estimate the parameters of the approximate distribution using the ideas discussed above.

For the target domain, we define a Dirichlet prior over the basis weights,

    Pr(λ, π) = Dir(λ; γ) Dir(π; ν),

where γ and ν are the hyper-parameters of the Dirichlet distributions. The posterior after each observation is

    Pr(λ, π, Y_t = j | X_t) ∝ Pr(X_t | Y_t = j) Σ_{i=1}^N Pr(Y_t = j | Y_{t-1} = i) Pr(λ, π, Y_{t-1} = i)   (10)
                            ∝ [ Σ_{k=1}^K π_k Σ_{u=1}^M w_{ju}^k N(X_t; μ_{ju}^k, (Λ_{ju}^k)^{-1}) ] [ Σ_{i=1}^N Σ_{m=1}^K λ_m θ_{ji}^m ] Dir(λ; γ) Dir(π; ν)   (11)
                            ∝ Σ_{k,m} Σ_{i=1}^N π_k Dir(π; ν) λ_m Dir(λ; γ) · (known source factors)   (12)
                            = (1/Z) Σ_{k,m} Σ_{i=1}^N C(i, j, k, m) Dir(π; ν̂) Dir(λ; γ̂),   (13)

where N is the number of hidden classes. Now, we can use the Bayesian Moment Matching algorithm to approximate Eq. (8) as a product of two Dirichlets, in the same form as the prior. This posterior will then act as the prior for the next time step. Finally, the values of the weights will be the expected value of each Dirichlet. Let us next see how the combination happens for a Dirichlet:

    λ_m Dir(λ; γ) = b_m Dir(λ; γ̂),   where γ̂_i = γ_i + 1 if i = m, γ̂_i = γ_i if i ≠ m, and b_m = γ_m / Σ_i γ_i,

and analogously π_k Dir(π; ν) = a_k Dir(π; ν̂). The constant is

    C(i, j, k, m) = a_k b_m θ_{ji}^m Σ_{u=1}^M w_{ju}^k N(X_t; μ_{ju}^k, (Λ_{ju}^k)^{-1}).

The set of sufficient moments is

    S = {λ_i, λ_i², π_i, π_i² | ∀ i ∈ {1, 2, ..., K}}.

The first moment can be evaluated as

    E[λ_n] = (1/Z) ∫∫ λ_n Σ_{k,m} Σ_{i=1}^N C(i, j, k, m) Dir(π; ν̂) Dir(λ; γ̂) d(π) d(λ)
           = (1/Z) Σ_{k,m} Σ_{i=1}^N C(i, j, k, m) · γ̂_n / Σ_u γ̂_u.

Similarly, the second moment can be evaluated as

    E[λ_n²] = (1/Z) ∫∫ λ_n² Σ_{k,m} Σ_{i=1}^N C(i, j, k, m) Dir(π; ν̂) Dir(λ; γ̂) d(π) d(λ)
            = (1/Z) Σ_{k,m} Σ_{i=1}^N C(i, j, k, m) · γ̂_n(γ̂_n + 1) / ((Σ_u γ̂_u)(1 + Σ_u γ̂_u)).

We evaluate the moments using the equations above, ∀ z ∈ S. Once we have the two moments, we can project the posterior onto a family of Dirichlet distributions having the same moments. In this way we can perform the learning of the parameters for the target domain."}, {"section_index": "14", "section_name": "EXPERIMENT RESULTS : SLEEP STAGE CLASSIFICATION", "section_text": "Fig. 3, 4 and 5 compare the performance of the online transfer learning algorithm with the baseline algorithm, the EM algorithm and recurrent neural networks (RNNs), respectively.
[Figure 3 omitted: two panels; (a) percentage accuracy of online transfer learning (o) and baseline (+) per patient ID, (b) per-patient and sorted differences in accuracy.]

Figure 3: Performance comparison of the online transfer learning algorithm and the baseline for the task of sleep stage classification. (a) Percentage accuracy. (b) Accuracy difference.

[Figure 4 omitted: two panels; (a) percentage accuracy of online transfer learning and EM per patient ID, (b) per-patient and sorted differences in accuracy.]

Figure 4: Performance comparison of the online transfer learning algorithm and the EM algorithm for the task of sleep stage classification. (a) Percentage accuracy. (b) Accuracy difference.

Fig. 3a compares the average percentage accuracy for our online transfer learning technique and the baseline algorithm, and Fig. 4a compares EM and online transfer learning. The blue + signs represent the accuracy of the baseline algorithm and the red o signs represent the accuracy of the online transfer learning algorithm. The black line is a reference line that passes through the points plotting the accuracy of the online transfer learning algorithm. The accuracy is plotted against each individual patient. The blue + signs are always below the black line, indicating superior performance of the transfer learning algorithm. Fig. 3b and 4b plot the difference between the accuracy of the baseline algorithm and the transfer learning algorithm. In the top plot, the difference in accuracy is shown for each patient corresponding to those shown in Fig. 3a and 4a. In the bottom plot, the difference in accuracy is plotted after sorting. A reference line of 0 is also plotted for the case where there is no difference in performance. The plots suggest that for a majority of patients the transfer learning technique outperforms both the baseline algorithm and EM.

[Figure 5 omitted: comparison between RNN and transfer learning; (a) percentage accuracy per patient ID, (b) per-patient and sorted differences in accuracy.]
Figure 5: Performance comparison of the online transfer learning algorithm and RNNs for the task of sleep stage classification. (a) Percentage accuracy. (b) Accuracy difference.

In Fig. 5a we compare the performance of the online transfer learning algorithm with RNNs. Fig. 5b plots the difference between the accuracy of the RNN and the online transfer learning algorithm. In the top plot, the difference in accuracy is shown for each patient corresponding to those shown in Fig. 5a. In the bottom plot, the difference in accuracy is plotted after sorting. The figures show that the online transfer learning algorithm outperformed RNNs for a majority of patients (102 out of 142). All the results are statistically significant under the Wilcoxon signed rank test with p-value < 0.05."}]
S1di0sfgl | [{"section_index": "0", "section_name": "HIERARCHICAL MULTISCALE RECURRENT NEURAL NETWORK", "section_text": "Junyoung Chung, Sungjin Ahn & Yoshua Bengio\n[junyoung.chung, sungjin.ahn, yoshua.bengio}@umontreal.ca\nLearning both hierarchical and temporal representation has been among the long. standing challenges of recurrent neural networks. Multiscale recurrent neural. networks have been considered as a promising approach to resolve this issue, yet. there has been a lack of empirical evidence showing that this type of models can. actually capture the temporal dependencies by discovering the latent hierarchical. structure of the sequence. In this paper, we propose a novel multiscale approach. called the hierarchical multiscale recurrent neural network, that can capture the. latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence. that the proposed model can discover underlying hierarchical structure in the. sequences without using explicit boundary information. We evaluate our proposed. model on character-level language modelling and handwriting sequence generation."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "One of the key principles of learning in deep neural networks as well as in the human brain is to obtain. a hierarchical representation with increasing levels of abstraction (Bengio]2009]LeCun et al.|2015 Schmidhuber| 2015). A stack of representation layers, learned from the data in a way to optimize. the target task, make deep neural networks entertain advantages such as generalization to unseen examples (Hoffman et al.| 2013), sharing learned knowledge among multiple tasks, and discovering. disentangling factors of variation (Kingma & Welling2013). The remarkable recent successes of. the deep convolutional neural networks are particularly based on this ability to learn hierarchical. representation for spatial data (Krizhevsky et al.,2012). For modelling temporal data, the recent. resurgence of recurrent neural networks (RNN) has led to remarkable advances (Mikolov et al.2010). Graves2013] Cho et al.2014] Sutskever et al.[2014f [Vinyals et al.[ 2015). However, unlike the spatial data, learning both hierarchical and temporal representation has been among the long-standing. challenges of RNNs in spite of the fact that hierarchical multiscale structures naturally exist in many. temporal data (Schmidhuber1991] Mozer1993} El Hihi & Bengio]1995] Lin et al.1996] Koutnik et al.|2014).\nA promising approach to model such hierarchical and temporal representation is the multiscale. RNNs (Schmidhuber1992] E1 Hihi & Bengio]1995} Koutnik et al.]2014). Based on the observation that high-level abstraction changes slowly with temporal coherency while low-level abstraction. has quickly changing features sensitive to the precise local timing (El Hihi & Bengio] 1995), the. multiscale RNNs group hidden units into multiple modules of different timescales. In addition to. the fact that the architecture fits naturally to the latent hierarchical structures in many temporal data. the multiscale approach provides the following advantages that resolve some inherent problems. of standard RNNs: (a) computational efficiency obtained by updating the high-level layers less. frequently, (b) efficiently delivering long-term dependencies with fewer updates at the high-level. 
layers, which mitigates the vanishing gradient problem; and (c) flexible resource allocation (e.g., more hidden units for the higher layers that focus on modelling long-term dependencies, and fewer hidden units for the lower layers which are in charge of learning short-term dependencies). In addition, the learned latent hierarchical structures can provide useful information to other downstream tasks, such as module structures in computer program learning, sub-task structures in hierarchical reinforcement learning, and story segments in video understanding.

There have been various approaches to implementing multiscale RNNs. The most popular approach is to set the timescales as hyperparameters (El Hihi & Bengio, 1995; Koutnik et al., 2014; Bahdanau et al., 2016) instead of treating them as dynamic variables that can be learned from the data (Schmidhuber, 1991; 1992; Chung et al., 2015; 2016). However, considering the fact that non-stationarity is prevalent in temporal data, and that many entities of abstraction such as words and sentences are of variable length, we claim that it is important for an RNN to dynamically adapt its timescales to the particulars of the input entities of various lengths. While this is trivial if the hierarchical boundary structure is provided (Sordoni et al., 2015), it has been a challenge for an RNN to discover the latent hierarchical structure in temporal data without explicit boundary information.

In this paper, we propose a novel multiscale RNN model, which can learn the hierarchical multiscale structure from temporal data without explicit boundary information. This model, called a hierarchical multiscale recurrent neural network (HM-RNN), does not assign fixed update rates, but adaptively determines proper update times corresponding to the different abstraction levels of the layers. We find that this model tends to learn fine timescales for low-level layers and coarse timescales for high-level layers. To do this, we introduce a binary boundary detector at each layer. The boundary detector is turned on only at the time steps where a segment of the corresponding abstraction level is completely processed. Otherwise, i.e., during within-segment processing, it stays turned off. Using the hierarchical boundary states, we implement three operations, UPDATE, COPY and FLUSH, and choose one of them at each time step. The UPDATE operation is similar to the usual update rule of the long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), except that it is executed sparsely according to the detected boundaries. The COPY operation simply copies the cell and hidden states of the previous time step. Unlike the leaky integration of the LSTM or the Gated Recurrent Unit (GRU) (Cho et al., 2014), the COPY operation retains the whole states without any loss of information. The FLUSH operation is executed when a boundary is detected, where it first ejects the summarized representation of the current segment to the upper layer and then reinitializes the states to start processing the next segment. Learning to select a proper operation at each time step and to detect the boundaries, the HM-RNN discovers the latent hierarchical structure of the sequences.
We find that the straight-through estimator (Hinton, 2012; Bengio et al., 2013; Courbariaux et al., 2016) is efficient for training this model containing discrete variables.

We evaluate our model on two tasks: character-level language modelling and handwriting sequence generation. For character-level language modelling, the HM-RNN achieves state-of-the-art results on the Text8 dataset, and results comparable to the state-of-the-art on the Penn Treebank and Hutter Prize Wikipedia datasets. The HM-RNN also outperforms the standard RNN on handwriting sequence generation using the IAM-OnDB dataset. In addition, we demonstrate that the hierarchical structure found by the HM-RNN is indeed very similar to the intrinsic structure observed in the data. The contributions of this paper are:

- We propose for the first time an RNN model that can learn a latent hierarchical structure of a sequence without using explicit boundary information.
- We show that it is beneficial to utilize the above structure through empirical evaluation.
- We show that the straight-through estimator is an efficient way of training a model containing discrete variables.
- We propose the slope annealing trick to improve the training procedure based on the straight-through estimator (a minimal sketch of both follows this list).
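Before turning to related work, the last two contributions can be made concrete with a minimal sketch (our own NumPy illustration, not the authors' code; all names are ours) of a binary boundary unit trained with the straight-through estimator and a slope-annealed hard sigmoid:

```python
import numpy as np

def hard_sigm(x, a):
    """Slope-annealed hard sigmoid: max(0, min(1, (a*x + 1)/2))."""
    return np.clip((a * x + 1.0) / 2.0, 0.0, 1.0)

def boundary_forward(x, a):
    z_tilde = hard_sigm(x, a)           # soft value in [0, 1]
    z = (z_tilde > 0.5).astype(float)   # hard binary decision used in the forward pass
    return z, z_tilde

def boundary_backward(grad_z, x, a):
    """Straight-through estimator: treat dz/dz_tilde as 1, so the binary unit's
    gradient w.r.t. its pre-activation is the slope of the hard sigmoid."""
    in_linear = (np.abs(a * x) < 1.0).astype(float)
    return grad_z * in_linear * (a / 2.0)
```

Under slope annealing, the slope a is gradually increased during training so that the hard sigmoid approaches a step function, while gradients remain informative in the early stages.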
"}, {"section_index": "3", "section_name": "2 RELATED WORK", "section_text": "Two notable early attempts inspiring our model are Schmidhuber (1992) and El Hihi & Bengio (1995). In these works, it is advocated to stack multiple layers of RNNs in a decreasing order of update frequency for computational and learning efficiency. In Schmidhuber (1992), the author shows a model that can self-organize a hierarchical multiscale structure. Particularly in El Hihi & Bengio (1995), the advantages of incorporating a priori knowledge, \"temporal dependencies are structured hierarchically\", into the RNN architecture are studied. The authors propose an RNN architecture that updates each layer with a fixed but different rate, called a hierarchical RNN.

LSTMs (Hochreiter & Schmidhuber, 1997) employ the multiscale update concept, where the hidden units have different forget and update rates and thus can operate with different timescales. However, unlike our model, these timescales are not organized hierarchically. Although the LSTM has a self-loop for the gradients that helps to capture long-term dependencies by mitigating the vanishing gradient problem, in practice it is still limited to a few hundred time steps due to the leaky integration, by which the contents to memorize for the long term are gradually diluted at every time step. Also, the model remains computationally expensive because it has to perform the update at every time step for each unit. However, our model is less prone to these problems because it learns a hierarchical structure such that, by design, high-level layers learn to perform less frequent updates than low-level layers. We hypothesize that this property mitigates the vanishing gradient problem more efficiently while also being computationally more efficient.

A more recent model, the clockwork RNN (CW-RNN) (Koutnik et al., 2014), extends the hierarchical RNN (El Hihi & Bengio, 1995) and the NARX RNN (Lin et al., 1996).¹ The CW-RNN tries to solve the issue of using soft timescales in the LSTM by explicitly assigning hard timescales. In the CW-RNN, hidden units are partitioned into several modules, and different timescales are assigned to the modules such that a module i updates its hidden units at every 2^(i-1)-th time step. The CW-RNN is computationally more efficient than the standard RNN, including the LSTM, since hidden units are updated only at the assigned clock rates. However, finding proper timescales in the CW-RNN remains a challenge, whereas our model learns the intrinsic timescales from the data. In the biscale RNNs (Chung et al., 2016), the authors proposed to model layer-wise timescales adaptively by having additional gating units; however, this approach still relies on a soft gating mechanism like LSTMs.

¹The acronym NARX stands for Non-linear Auto-Regressive model with eXogenous inputs.

Other forms of hierarchical RNN (HRNN) architectures have been proposed for the cases where the explicit hierarchical boundary structure is provided. In Ling et al. (2015), after obtaining the word boundaries via tokenization, the HRNN architecture is used for neural machine translation by modelling the characters and words using the first and second RNN layers, respectively. A similar HRNN architecture is also adopted in Sordoni et al. (2015) to model dialogue utterances. However, in many cases hierarchical boundary information is not explicitly observed or is expensive to obtain. Also, it is unclear how to deploy more layers than the number of boundary levels that is explicitly observed in the data.

While the above models focus on online prediction problems, where a prediction needs to be made using only the past data, in some cases predictions are made after observing the whole sequence. In this setting, the input sequence can be regarded as 1-D spatial data; convolutional neural networks with 1-D kernels are proposed in Kim (2014) and Kim et al. (2015) for language modelling and sentence classification. Also, in Chan et al. (2016) and Bahdanau et al. (2016), the authors proposed to obtain high-level representations of sequences of reduced length by repeatedly merging or pooling the lower-level representations of the sequences.

Hierarchical RNN architectures have also been used to discover the segmentation structure in sequences (Fernandez et al., 2007; Kong et al., 2015). This is, however, different from our model in the sense that they optimize the objective with explicit labels on the hierarchical segments, while our model discovers the intrinsic structure only from the sequences, without segment label information.

The COPY operation used in our model can be related to Zoneout (Krueger et al., 2016), which is a recurrent generalization of stochastic depth (Huang et al., 2016). In Zoneout, an identity transformation is randomly applied to each hidden unit at each time step according to a Bernoulli distribution. This results in occasional copy operations of the previous hidden states. While the focus of Zoneout is to propose a regularization technique similar to dropout (Srivastava et al., 2014) (where the regularization strength is controlled by a hyperparameter), our model learns (a) to dynamically determine when to copy from the context inputs and (b) to discover the hierarchical multiscale structure and representation. Although the main goal of our proposed model is not regularization, we found that our model also shows very good generalization performance.

[Figure 1 omitted: panels (a) and (b), depicting character-to-word and word-to-phrase layers.] Figure 1: (a) The HRNN architecture, which requires the knowledge of the hierarchical boundaries. (b) The HM-RNN architecture that discovers the hierarchical multiscale structure in the data.
setting, i.e., when the hierarchy of segments is provided (Sordoni et al.2015). Ling et al.|2015). Ir Figure[1(a), we depict a hierarchical RNN (HRNN) for language modelling with two layers: the first. layer receives characters as inputs and generates word-level representations (C2w-RNN), and the second layer takes the word-level representations as inputs and yields phrase-level representations (W2P-RNN).\nAs shown, by means of the provided end-of-word labels, the C2w-RNN obtains word-level represen. tation after processing the last character of each word and passes the word-level representation to the W2P-RNN. Then, the W2P-RNN performs an update of the phrase-level representation. Note that the. hidden states of the W2P-RNN remains unchanged while all the characters of a word are processed by. the C2W-RNN. When the C2W-RNN starts to process the next word, its hidden states are reinitialized. using the latest hidden states of the W2P-RNN, which contain summarized representation of all the. words that have been processed by that time step, in that phrase..\nCan an RNN discover such hierarchical multiscale structure without explicit hierarchical boundary. nformation? Considering the fact that the boundary information is difficult to obtain (for example consider languages where words are not always cleanly separated by spaces or punctuation symbols. and imperfect rules are used to separately perform segmentation) or usually not provided at all, this is. a legitimate problem. It gets worse when we consider higher-level concepts which we would like. the RNN to discover autonomously. In Section[2] we discussed the limitations of the existing RNN models under this setting, which either have to update all units at every time step or use fixed update. frequencies (El Hihi & Bengio]1995, Koutnik et al.]2014). Unfortunately, this kind of approach is. not well suited to the case where different segments in the hierarchical decomposition have differen. lengths: for example, different words have different lengths, so a fixed hierarchy would not update its. upper-level units in synchrony with the natural boundaries in the data.."}, {"section_index": "5", "section_name": "3.2 THE PROPOSED MODEL", "section_text": "1 The acronym NARX stands for Non-linear Auto-Regressive model with eXogenous inputs\nFrom this simple example, we can see the advantages of having a hierarchical multiscale structure: (1) as the W2P-RNN is updated at a much slower update rate than the C2W-RNN, a considerable amount. of computation can be saved, (2) gradients are backpropagated through a much smaller number of. time steps, and (3) layer-wise capacity control becomes possible (e.g., use a smaller number of hidden units in the first layer which models short-term dependencies but whose updates are invoked much. more often).\nA key element of our model is the introduction of a parametrized boundary detector, which outputs. a binary value, in each layer of a stacked RNN, and learns when a segment should end in such. a way to optimize the overall target objective. Whenever the boundary detector is turned on at a time step of layer l (i.e., when the boundary state is 1), the model considers this to be the end of a\nHM-LSTM.\nf O c_1 + i O gt if zf-1 = 0 and z I = 1(UPDAT if zf-1 = 0 and zt -1 = 0 (COPY) l t if O gt if zf-1 = 1 (FLUSH),\nif COPY, O tanh(c) otherwise\nThe COPY operation, which simply performs (c, h) (ct-1, h_1), implements the observation. 
that an upper layer should keep its state unchanged until it receives the summarized input from the lower layer. The UPDATE operation is performed to update the summary representation of the zf-1 is detected from the layer below but the boundary zf-1 was not found. layer l if the boundary zt at the previous time step. Hence, the UPDATE operation is executed sparsely unlike the standard RNNs where it is executed at every time step, making it computationally inefficient. If a boundary is detected, the FLUSH operation is executed. The FLUSH operation consists of two sub-operations (a) EJECT to pass the current state to the upper layer and then (b) RESET to reinitialize the state before starting to read a new segment. This operation implicitly forces the upper layer to absorb the summary information of the lower layer segment, because otherwise it will be lost. Note that the FLUSH operation is a hard reset in the sense that it completely erases all the previous states of the. same layer, which is different from the soft reset or soft forget operation in the GRU or LSTM..\nWhenever needed (depending on the chosen operation), the gate values (ff, if, of), the cell proposa are then obtained by:\nsigm i sigm recurrent( top-down( bottom-up(l) sigm 0 fslice St tanh 60 hard sigm\nHere, we use W E R(4dim(h2)+1) dim(he-1), U E R(4dim(h')+1) dim(h2) to denote state transition\ncan also be implemented as a function of ht, e.g., zt = hard sigm(Uht).\nsegment corresponding to the latent abstraction level of that layer (e.g., word or phrase) and feeds the summarized representation of the detected segment into the upper layer (l + 1). Using the boundary states, at each time step, each layer selects one of the following operations: UPDATE, COPY or FLUSH. The selection is determined by (1) the boundary state of the current time step in the layer below zf-1 and (2) the boundary state of the previous time step in the same layer zf\nIn the following, we describe an HM-RNN based on the LSTM update rule. We call this model a hierarchical multiscale LSTM (HM-LSTM). Consider an HM-LSTM model of L layers (l = .. L) which. at each laver l. performs the following update at time step t:.\nHere, (f, i, o) are forget, input, output gates, and g is a cell proposal vector. Note that unlike the LSTM, it is not necessary to compute these gates and cell proposal values at every time step. For example, in the case of the COPY operation, we do not need to compute any of these values and thus can save computations.\nrecurrent(e) top-down(e) +1 1 bottom-up(l) St\nA rl+1\nFigure 2: Left: The gating mechanism of the HM-RNN. Right: The output module when L = :\ntop-down connection is ignored, and we use h = xt. Since the input should not be omitted, we set z = 1 for all t. Also, we do not use the boundary detector for the last layer. The hard sigm is. defined by hard sigm(x) = max (0, min (1. ax+1 with a being the slope variable..\nUnlike the standard LSTM, the HM-LSTM has a top-down connection from (l + 1) to l, which is. allowed to be activated only if a boundary is detected at the previous time step of the layer l (see. Eq.[6). This makes the layer l to be initialized with more long-term information after the boundary. is detected and execute the FLUSH operation. In addition, the input from the lower layer (l - 1) becomes effective only when a boundary is detected at the current time step in the layer (l - 1) due.\nFinally. 
the binary boundary state t is obtained by:.\nfbound\nif zl > 0.5 otherwise,\nor sample from a Bernoulli distribution z ~ Bernoulli(z). Although this binary decision is a key to. our model, it is usually difficult to use stochastic gradient descent to train such model with discrete decisions as it is not differentiable."}, {"section_index": "6", "section_name": "3.3 COMPUTING GRADIENT OF BOUNDARY DETECTOR", "section_text": "Training neural networks with discrete variables requires more efforts since the standard backpropa. gation is no longer applicable due to the non-differentiability. Among a few methods for training a. neural network with discrete variables such as the REINFORCE (Williams1992]Mnih & Gregor 2014) and the straight-through estimator (Hinton2012 Bengio et al.2013), we use the straight through estimator to train our model. The straight-through estimator is a biased estimator because the. non-differentiable function used in the forward pass (i.e., the step function in our case) is replaced by a differentiable function during the backward pass (i.e., the hard sigmoid function in our case). The straight-through estimator, however, is much simpler and often works more efficiently in practice than other unbiased but high-variance estimators such as the REINFORCE. The straight-through. estimator has also been used in Courbariaux et al.(2016) and|Vezhnevets et al.(2016)."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "We evaluate the proposed model on two tasks, character-level language modelling and handwriting sequence generation. Character-level language modelling is a representative example of discrete\n3 +1 9t 9t\nThe Slope Annealing Trick. In our experiment, we use the slope annealing trick to reduce the bias. of the straight-through estimator. The idea is to reduce the discrepancy between the two functions used during the forward pass and the backward pass. That is, by gradually increasing the slope a of the hard sigmoid function, we make the hard sigmoid be close to the step function. Note that starting. with a high slope value from the beginning can make the training difficult while it is more applicable later when the model parameters become more stable. In our experiments, starting from slope a = 1,. we slowly increase the slope until it reaches a threshold with an appropriate scheduling..\nTable 1: BPC on the Penn Treebank test set (left) and Hutter Prize Wikipedia test set (right). (*) This model is a variant of the HM-LSTM that does not discretize the boundary detector states. (t) These models are implemented by the authors to evaluate the performance using layer normalization (Ba et al.|2016) with the additional output module. () This method uses test error signals for predicting. the next characters, which makes it not comparable to other methods that do not..\nA sequence modelling task aims at learning the probability distribution over sequences by minimizing the negative log-likelihood of the training sequences:.\nN Tn 1 1ogp(xf|xZt min 0 N n=1 t=1\nwhere 0 is the model parameter, N is the number of training sequences, and Tn is the length of the n-th sequence. A symbol at time t of sequence n is denoted by xt, and x%t denotes all previous. symbols at time t. We evaluate our model on three benchmark text corpora: (1) Penn Treebank, (2) Text8 and (3) Hutter Prize Wikipedia. We use the bits-per-character (BPC), E[ log2 p(xt+1 x<t)] as the evaluation metric.\n= sigm(w[h;... ; hD)\nwhere wl RL-1 arameter. 
The output embedding h is computed by\ngtWihf hf = ReLU l=1\nPenn Treebank Hutter Prize Wikipedia. Model BPC Model BPC Norm-stabilized RNN Krueger & Memisevic 2015) 1.48 Stacked LSTM (Graves2013) 1.67 CW-RNN Koutnik et al. 2014 1.46 MRNN (Sutskever et al. 2011 1.60 HF-MRNN (Mikolov et al. 2012 1.41 GF-LSTM (Chung et al.. 2015 1.58 MI-RNN (Wu et al. 2016 1.39 Grid-LSTM (Kalchbrenner et al.|2015 1.47 ME n-gram (Mikolov et al. 2012) 1.37 MI-LSTM (Wu et al.]2016 1.44 BatchNorm LSTM 2016) 1.32 Recurrent Memory Array Structures (Rocki]2016a Cooijmans et al. 1.40 Zoneout RNN Krueger et al. 2016 1.27 SF-LSTM (Rocki|2016b)+ 1.37 HyperNetworks (Ha et al. 2016) 1.27 HyperNetworks (Ha et al.|2016) 1.35 LayerNorm HyperNetworks (Ha et al. 2016) 1.23 LayerNorm HyperNetworks (Ha et al.[2016) 1.34 LayerNorm CW-RNNi 1.40 Recurrent Highway Networks (Zilly et al.|2016 1.32 LayerNorm LSTM+ 1.29 LayerNorm LSTM 1.39 LayerNorm HM-LSTM Sampling 1.27 HM-LSTM 1.34 LayerNorm HM-LSTM Soft* 1.27 LayerNorm HM-LSTM 1.32 LayerNorm HM-LSTM Step Fn. 1.25 PAQ8hp12 (Mahoney2005] 1.32 LayerNorm HM-LSTM Step Fn. & Slope Annealing. 1.24 decomp8 (Mahoney 2009) 1.28\nsequence modelling, where the discrete symbols form a distinct hierarchical multiscale structure. The performance on real-valued sequences is tested on the handwriting sequence generation in which a relatively clear hierarchical multiscale structure exists compared to other data such as speech signals\nModel We use a model consisting of an input embedding layer, an RNN module and an output. module. The input embedding layer maps each input symbol into 128-dimensional continuous vector without using any non-linearity. The RNN module is the HM-LSTM, described in Section 3 with three layers. The output module is a feedforward neural network with two layers, an output embedding layer and a softmax layer. Figure[2|(right) shows a diagram of the output module. At each. time step, the output embedding layer receives the hidden states of the three RNN layers as input. In order to adaptively control the importance of each layer at each time step, we also introduce three scalar gating units gt E R to each of the layer outputs:.\nText8 Model BPC td-LSTM (Zhang et al.]|2016 1.63 HF-MRNN (Mikolov et aI.J|2012 1.54 MI-RNN 7Wu et al.I2016 1.52 Skipping-RNN (Pachitariu & Sahani 2013 1.48 MI-LSTM (Wu et al.[2016) 1.44 BatchNorm LSTM 7Cooijmans et al 2016 1.36 HM-LSTM 1.32 LayerNorm HM-LSTM 1.29\nTable 2: BPC on the Text8 test set\nPenn Treebank We process the Penn Treebank dataset (Marcus et al.[[1993) by following the procedure introduced in Mikolov et al. (2012). Each update is done by using a mini-batch of 64 examples of length 100 to prevent the memory overflow problem when unfolding the RNN in time for backpropagation. The last hidden state of a sequence is used to initialize the hidden state of the next sequence to approximate the full backpropagation. We train the model using Adam (Kingma & Ba] 2014) with an initial learning rate of 0.002. We divide the learning rate by a factor of 50 wher the validation negative log-likelihood stopped decreasing. The norm of the gradient is clipped with a threshold of 1 (Mikolov et al.|2010f Pascanu et al.2012). We also apply layer normalization (Ba et al.[[2016) to our models. 
For all of the character-level language modelling experiments, we apply the same procedure, but only change the number of hidden units, mini-batch size and the initia learning rate.\nFor the Penn Treebank dataset, we use 512 units in each layer of the HM-LSTM and for the outpu. embedding layer. In Table[1(left), we compare the test BPCs of four variants of our model to othe. baseline models. Note that the HM-LSTM using the step function for the hard boundary decisior. outperforms the others using either sampling or soft boundary decision (i.e., hard sigmoid). The tes. BPC is further improved with the slope annealing trick, which reduces the bias of the straight-througl. estimator. We increased the slope a with the following schedule a = min (5, 1 + 0.04 . Nepoch), where. Nepoch is the maximum number of epochs. The HM-LSTM achieves test BPC score of 1.24. For the. remaining tasks, we fixed the hard boundary decision using the step function without slope annealin due to the difficulty of finding a good annealing schedule on large-scale datasets..\nText8 The Text8 dataset (Mahoney2009) consists of 100M characters extracted from the Wikipedia corpus. Text8 contains only alphabets and spaces, and thus we have total 27 symbols. In order to compare with other previous works, we follow the data splits used in|Mikolov et al.[(2012). We use 1024 units for each HM-LSTM layer and 2048 units for the output embedding layer. The mini-batch size and the initial learning rate are set to 128 and 0.001, respectively. The results are shown in Table2] The HM-LSTM obtains the state-of-the-art test BPC 1.29.\nHutter Prize Wikipedia The Hutter Prize Wikipedia (enwi k8) dataset (Hutter2012) contains 205 symbols including XML markups and special characters. We follow the data splits used in|Grave. (2013) where the first 90M characters are used to train the model, the next 5M characters for validatior and the remainders for the test set. We use the same model size, mini-batch size and the initia learning rate as in the Text8. In Table[1(right), we show the HM-LSTM achieving the test BPC 1.32 which is a tie with the state-of-the-art result among the neural models. Although the neural models show remarkable performances, their compression performance is still behind the best models such as PAQ8hp12 (Mahoney2005) and decomp8 (Mahoney2009)\nVisualizing Learned Hierarchical Multiscale StructureIn Figure 3|and 4] we visualize the. boundaries detected by the boundary detectors of the HM-LSTM while reading a character sequence. of total length 270 taken from the validation set of either the Penn Treebank or Hutter Prize Wikipedia dataset. Due to the page width limit, the figure contains the sequence partitioned into three segments. of length 90. The white blocks indicate boundaries zf = 1 while the black blocks indicate the. non-boundaries z = 0.\nInterestingly in both figures, we can observe that the boundary detector of the first layer, z', tends. to be turned on when it sees a space or after it sees a space, which is a reasonable breakpoint to. separate between words. 
This is somewhat surprising because the model self-organizes this structure\nFigure 3: Hierarchical multiscale structure in the Wikipedia dataset captured by the boundary detectors of the HM-LSTM Penn Treebank Line 1 h |h2 h| onsumeI wan moV e t elephones the w a t c Penn Treebank Line 2 |h3| h1 hI ng abc monday nIght footbal cannow Voteduring <u nk > Ehe gr eatest pIay y e a Penn Treebank Line 3 |h3 |h2 |h|E\nFigure 4: The l2-norm of the hidden states shown together with the states of the boundary detectors of the HM-LSTM.\nwithout any explicit boundary information. In Figure[3] we observe that the z1 tends to detect the boundaries of the words but also fires within the words, where the z2 tends to fire when it sees either an end of a word or 2, 3-grams. In Figure4] we also see flushing in the middle of a word, e.g. \"'tele-FLUSH-phone\". Note that \"tele\"' is a prefix after which a various number of postfixes can follow From these, it seems that the model uses to some extent the concept of surprise to learn the boundary Although interpretation of the second layer boundaries is not as apparent as the first layer boundaries it seems to segment at reasonable semantic / syntactic boundaries, e.g., \"consumers may' - \"want to move their telephones a\" - \"little closer to the tv set <unk>\", and so on.\nAnother remarkable point is the fact that we do not pose any constraint on the number of boundaries that the model can fire up. The model, however, learns that it is more beneficial to delay the information ejection to some extent. This is somewhat counterintuitive because it might look more beneficial to feed the fresh update to the upper layers at every time step without any delay. We conjecture the reason that the model works in this way is due to the FLUSH operation that poses an Implicit constraint on the frequency of boundary detection, because it contains both a reward (feeding fresh information to upper layers) and a penalty (erasing accumulated information). The model finds an optimal balance between the reward and the penalty.\nTo understand the update mechanism more intuitively, in Figure4] we also depict the heatmap of the e2-norm of the hidden states along with the states of the boundary detectors. As we expect, we car see that there is no change in the norm value within segments due to the COPY operation. Also, th color of |h1|| changes quickly (at every time step) because there is no COPY operation in the firs layer. The color of ||h2|| changes less frequently based on the states of z and z?-1. The color of |h3|| changes even slowly, i.e., only when z? = 1.\nA notable advantage of the proposed architecture is that the internal process of the RNN becomes more interpretable. For example, we can substitute the states of z and z?-1 into Eq.2 and infer. which operation among the UPDATE, COPY and FLUSH was applied to the second layer at time step. t. We can also inspect the update frequencies of the layers simply by counting how many UPDATE. and FLUSH operations were made in each layer. 
For example in Figure4] we see that the first layer updates at every time step (which is 270 UPDATE operations), the second layer updates 56 times\nTable 3: Average log-likelihood per sequence on the IAM-OnDB test set.\nncn n1ac\nFigure 5: The visualization by s egments based on either the given pen-tip location or states of the z2"}, {"section_index": "8", "section_name": "4.2 HANDWRITING SEOUENCE GENERATION", "section_text": "We extend the evaluation of the HM-LSTM to a real-valued sequence modelling task using IAM OnDB (Liwicki & Bunke2005) dataset. The IAM-OnDB dataset consists of 12, 179 handwriting examples, each of which is a sequence of (x, y) coordinate and a binary indicator p for pen-tip location, giving us (x1:Tn, Y1:Tn,P1:Tn), where n is an index of a sequence. At each time step the model receives (xt, Yt, Pt), and the goal is to predict (xt+1, Yt+1, Pt+1). The pen-up (pt = 1 indicates an end of a stroke, and the pen-down (pt = O) indicates that a stroke is in progress. There is usually a large shift in the (x, y) coordinate to start a new stroke after the pen-up happens. We remove all sequences whose length is shorter than 300. This leaves us 10, 465 sequences for training 581 for validation, 582 for test. The average length of the sequences is 648. We normalize the range of the (x, y) coordinates separately with the mean and standard deviation obtained from the training set. We use the mini-batch size of 32, and the initial learning rate is set to 0.0003.\nWe use the same model architecture as used in the character-level language model, except that the output layer is modified to predict real-valued outputs. We use the mixture density network as the output layer following Graves (2013), and use 400 units for each HM-LSTM layer and for the output embedding layer. In Table[3] we compare the log-likelihood averaged over the test sequences of the IAM-OnDB dataset. We observe that the HM-LSTM outperforms the standard LSTM. The slope annealing trick further improves the test log-likelihood of the HM-LSTM into 1167 in our setting. In this experiment, we increased the slope a with the following schedule a = min (3, 1 + 0.004 . Nepoch) In Figure 5] we let the HM-LSTM to read a randomly picked validation sequence and present the visualization of handwriting examples by segments based on either the states of z2 or the states of pen-tip location3"}, {"section_index": "9", "section_name": "5 CONCLUSION", "section_text": "In this paper, we proposed the HM-RNN that can capture the latent hierarchical structure of the sequences. We introduced three types of operations to the RNN, which are the COPY, UPDATF and FLUSH operations. In order to implement these operations, we introduced a set of binary variables and a novel update rule that is dependent on the states of these binary variables. Each binary variable is learned to find segments at its level, therefore, we call this binary variable, a boundary detector. On the character-level language modelling, the HM-LSTM achieved state-of-the-art resul on the Text8 dataset and comparable results to the state-of-the-art results on the Penn Treebanl and Hutter Prize Wikipedia datasets. Also, the HM-LSTM outperformed the standard LSTM or the handwriting sequence generation. Our results and analysis suggest that the proposed HM-RNN can discover the latent hierarchical structure of the sequences and can learn efficient hierarchica multiscale representation that leads to better generalization performance.\nand only 9 updates has made in the third layer. 
Note that, by design, the first layer performs UPDATE operation at every time step and then the number of UPDATE operations decreases as the layer level increases. In this example, the total number of updates is 335 for the HM-LSTM which is 60% of reduction from the 810 updates of the standard RNN architecture\nTheplot functioncouldbe foundatb1og.otoro.net/2015/12/12/handwriting-generation-demo-in-tensorflow/"}, {"section_index": "10", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Alex Graves, Tom Schaul and Hado van Hasselt for their fruitful comments and discussion. We acknowledge the support of the following agencies for research funding and computing support: Ubisoft, Samsung, IBM, Facebook, Google, Microsoft, NSERC. Calcul Quebec, Compute Canada, the Canada Research Chairs and CIFAR. The authors thank the. developers of Theano (Team et al.]2016). JC would like to thank Arnaud Bergenon and Frederic Bastien for their technical support. JC would also like to thank Guillaume Alain, Kyle Kastner and. David Ha for providing us useful pieces of code.."}, {"section_index": "11", "section_name": "REFERENCES", "section_text": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv: 1607.06450 2016.\nYoshua Bengio, Nicholas Leonard, and Aaron Courville. Estimating or propagating gradients through stochasti neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013\nKyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Empiricial Methods in Natural Language Processing (EMNLP 2014), October 2014.\nJunyoung Chung, Kyunghyun Cho, and Yoshua Bengio. A character-level decoder without explicit segmentation for neural machine translation. Association for Computational Linguistics (ACL), 2016..\nTim Cooijmans, Nicolas Ballas, Cesar Laurent, and Aaron Courville. Recurrent batch normalization. arXi preprint arXiv:1603.09025, 2016.\nMatthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neura networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprini arXiv:1602.02830, 2016.\nSalah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. Ii Advances in Neural Information Processing Systems, pp. 493-499. Citeseer, 1995.\nSantiago Fernandez, Alex Graves, and Jurgen Schmidhuber. Sequence labelling in structured domains with hierarchical recurrent neural networks. In Proceedings of the 2Oth international joint conference on Artifical intelligence, pp. 774779. Morgan Kaufmann Publishers Inc., 2007.\nlex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013\nG. Hinton. Neural networks for machine learning. Coursera, video lectures, 2012\nJudy Hoffman, Eric Tzeng, Jeff Donahue, Yangqing Jia, Kate Saenko, and Trevor Darrell. One-shot adaptatior of supervised deep convolutional models. arXiv preprint arXiv:1312.6204, 2013.\nGao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth arXiv preprint arXiv:1603.09382, 2016.\nYoshua Bengio. Learning deep architectures for ai. Foundations and trends(?) in Machine Learning, 2(1):1-127 2009.\nunyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neura. networks. 
In Proceedings of the 32nd International Conference on Machine Learning (CML). 2015\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory.. Neural computation, 9(8):1735-1780, 1997\nMarcus Hutter. The human knowledge compression contest. 2012. URLhttp: //pri ze. hutter1. net/\nNal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. arXiv preprint. arXiv:1507.01526, 2015.\nYoon Kim. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014\nDiederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013\nLingpeng Kong, Chris Dyer, and Noah A Smith. Segmental recurrent neural networks. arXiv preprin arXiv:1511.06018, 2015\nJan Koutnik, Klaus Greff, Faustino Gomez, and Jurgen Schmidhuber. A clockwork rnn. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), 2014.\nAlex Krizhevsky. Ilya Sutskever. and Geoffrey E Hinton. Imagenet classification with deep convolutional neura networks. In Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.\nYann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.\nWang Ling, Isabel Trancoso, Chris Dyer, and Alan W Black. Character-based neural machine translation. arXiv preprint arXiv:1511.04586. 2015.\nIatthew V Mahoney. Adaptive weighing of context models for lossless data compression. 2005\nMitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus o english: The penn treebank. Computational linguistics, 19(2):313-330, 1993.\nTomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In INTERSPEECH, volume 2, pp. 3, 2010\nMichael C Mozer. Induction of multiscale temporal structure. Advances in neural information processing systems, pp. 275-275, 1993.\nMarius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language models: when are they needed? arXiv preprint arXiv:1301.5650, 2013.\nYoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models arXiv preprint arXiv:1508.06615, 2015.\nTsungnan Lin, Bill G Horne, Peter Tino, and C Lee Giles. Learning long-term dependencies in narx recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329-1338, 1996..\nMatthew V Mahoney. Large text compression benchmark. URL: http://www. mattmahoney. net/text/text. html 2009.\nAndriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In Proceedings oJ the 31st International Conference on Machine Learning (ICML-14). pp. 1791-1799. 2014\nVinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceeding. of the 27th International Conference on Machine Learning (ICML-10). pp. 807-814. 2010\nRazvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks arXiv preprint arXiv:1211.5063, 2012\nKamil M Rocki. Recurrent memory array structures. arXiv preprint arXiv:1607.03085, 2016a\nJurgen Schmidhuber. Neural sequence chunkers. 1991\nJurgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992.\nJurgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015\nIlya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advance. 
in Neural Information Processing Systems, pp. 3104-3112, 2014.\nThe Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller Dzmitry Bahdanau, Nicolas Ballas, Frederic Bastien, Justin Bayer, Anatoly Belikov, et al. Theano: A python framework for fast computation of mathematical expressions. arXiv preprint arXiv:1605.02688, 2016.\nRonald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning Machine learning, 8(3-4):229-256, 1992.\nYuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio, and Ruslan Salakhutdinov. On multiplicative integration with recurrent neural networks. arXiv preprint arXiv:1606.06630, 2016.\nJulian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent highwa networks. arXiv preprint arXiv:1607.03474, 2016.\nIlya Sutskever, James Martens, and Geoffrey E Hinton. Generating text with recurrent neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML'11). pp. 1017-1024. 2011\nKavukcuoglu, et al. Strategic attentive writer for learning macro-actions. arXiv preprint arXiv:1606.04695 2016. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3156 3164, 2015."}] |
S1xh5sYgx | [{"section_index": "0", "section_name": "SOUEEZENET: ALEXNET-LEVEL ACCURACY WITH 5OX FEWER PARAMETERS AND <O.5MB MODEL SIZE", "section_text": "Forrest N. Iandola1, Song Han?, Matthew W. Moskewicz', Khalid Ashrafl William J. Dally2. Kurt Keutzer\nRecent research on deep convolutional neural networks (CNNs) has focused pri narily on improving accuracy. For a given accuracy level, it is typically possi ble to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1 Smaller CNNs require less communication across servers during distributed train- ing. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FP GAs and other hardware with limited memory. To provide all of these advantages we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510 smaller than AlexNet). The SqueezeNet architecture iS available for download here:\nMuch of the recent research on deep convolutional neural networks (CNNs) has focused on increas- ing accuracy on computer vision datasets. For a given accuracy level, there typically exist multiple CNN architectures that achieve that accuracy level. Given equivalent accuracy, a CNN architecture with fewer parameters has several advantages:\n'For example, the Xilinx Vertex-7 FPGA has a maximum of 8.5 MBytes (i.e. 68 Mbits) of on-chip memory and does not provide off-chip memory.."}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "More efficient distributed training. Communication among servers is the limiting factor to the scalability of distributed CNN training. For distributed data-parallel training, com- munication overhead is directly proportional to the number of parameters in the model (Ian- dola et al.|2016). In short, small models train faster due to requiring less communication. Less overhead when exporting new models to clients. For autonomous driving, compa- nies such as Tesla periodically copy new models from their servers to customers' cars. This practice is often referred to as an over-the-air update. Consumer Reports has found that the safety of Tesla's Autopilot semi-autonomous driving functionality has incrementally improved with recent over-the-air updates (Consumer Reports! 2016). However, over-the- air updates of today's typical CNN/DNN models can require large data transfers. With AlexNet, this would require 240MB of communication from the server to the car. Smaller models require less communication, making frequent updates more feasible. Feasible FPGA and embedded deployment. FPGAs often have less than 10MB1|of on- chip memory and no off-chip memory or storage. For inference, a sufficiently small model could be stored directly on the FPGA instead of being bottlenecked by memory band- width (Qiu et al.[2016), while video frames stream through the FPGA in real time. Further, when deploying CNNs on Application-Specific Integrated Circuits (ASICs), a sufficiently small model could be stored directly on-chip, and smaller models may enable the ASiC to fit on a smaller die.\nThe rest of the paper is organized as follows. In Section2|we review the related work. 
Then, in Sections 3|and4|we describe and evaluate the SqueezeNet architecture.After that, we turn our attention to understanding how CNN architectural design choices impact model size and accuracy. We gain this understanding by exploring the design space of SqueezeNet-like architectures. In Section 5] we do design space exploration on the CNN microarchitecture, which we define as the organization and dimensionality of individual layers and modules. In Section|6] we do design space exploration on the CNN macroarchitecture, which we define as high-level organization of layers in a CNN. Finally, we conclude in Section[7] In short, Sections3|and4|are useful for CNN researchers as well as practitioners who simply want to apply SqueezeNet to a new application. The remaining sections are aimed at advanced researchers who intend to design their own CNN architectures\nCOMPRESSION The overarching goal of our work is to identify a model that has very few parameters while preserv- ing accuracy. To address this problem, a sensible approach is to take an existing CNN model and compress it in a lossy fashion. In fact, a research community has emerged around the topic of model compression, and several approaches have been reported. A fairly straightforward approach by Den ton et al. is to apply singular value decomposition (SVD) to a pretrained CNN model (Denton et al. 2014). Han et al. developed Network Pruning, which begins with a pretrained model, then replaces parameters that are below a certain threshold with zeros to form a sparse matrix, and finally performs a few iterations of training on the sparse CNN (Han et al.l|2015b). Recently, Han et al. extended their work by combining Network Pruning with quantization (to 8 bits or less) and huffman encoding to create an approach called Deep Compression (Han et al.2015a), and further designed a hardware accelerator called EIE (Han et al.2016a) that operates directly on the compressed model, achieving substantial speedups and energy savings.\nCROARCHIIECIORE Convolutions have been used in artificial neural networks for at least 25 years; LeCun et al. helpe. to popularize CNNs for digit recognition applications in the late 1980s (LeCun et al.]1989). Ir neural networks, convolution filters are typically 3D, with height, width, and channels as the ke. dimensions. When applied to images, CNN filters typically have 3 channels in their first layer (i.e. RGB), and in each subsequent layer L, the filters have the same number of channels as L-1 has. filters. The early work by LeCun et al. (LeCun et al.]1989) uses 5x5xChannels2[filters, and the recent VGG (Simonyan & Zisserman2014) architectures extensively use 3x3 filters. Models sucl as Network-in-Network (Lin et al.||2013) and the GoogLeNet family of architectures (Szegedy et al. 2014}Ioffe & Szegedy2015 Szegedy et al. 2015} 2016) use 1x1 filters in some layers.\nWith the trend of designing very deep CNNs, it becomes cumbersome to manually select filter di mensions for each layer. To address this, various higher level building blocks, or modules, comprisec of multiple convolution layers with a specific fixed organization have been proposed. For example the GoogLeNet papers propose Inception modules, which are comprised of a number of different di mensionalities of filters, usually including 1x1 and 3x3, plus sometimes 5x5 (Szegedy et al.|2014 and sometimes 1x3 and 3x1 (Szegedy et al.[|2015). Many such modules are then combined, perhap with additional ad-hoc layers, to form a complete network. 
We use the term CNN microarchitectur to refer to the particular organization and dimensions of the individual modules.\nFrom now on, we will simply abbreviate HxWxChannels to HxW\nAs you can see, there are several advantages of smaller CNN architectures. With this in mind, we focus directly on the problem of identifying a CNN architecture with fewer parameters but equivalent. accuracy compared to a well-known model. We have discovered such an architecture, which we call. SqueezeNet. In addition, we present our attempt at a more disciplined approach to searching the. design space for novel CNN architectures.\nPerhaps the mostly widely studied CNN macroarchitecture topic in the recent literature is the impact of depth (i.e. number of layers) in networks. Simoyan and Zisserman proposed the VGG (Simonyan & Zisserman2014) family of CNNs with 12 to 19 layers and reported that deeper networks produce higher accuracy on the ImageNet-1k dataset (Deng et al. |2009). K. He et al. proposed deeper CNNs with up to 30 layers that deliver even higher ImageNet accuracy (He et al.]2015a).\nThe choice of connections across multiple layers or modules is an emerging area of CNN macroar- chitectural research. Residual Networks (ResNet) (He et al.2015b) and Highway Networks (Sri- vastava et al.2015) each propose the use of connections that skip over multiple layers, for example additively connecting the activations from layer 3 to the activations from layer 6. We refer to these connections as bypass connections. The authors of ResNet provide an A/B comparison of a 34-layer CNN with and without bypass connections; adding bypass connections delivers a 2 percentage-point improvement on Top-5 ImageNet accuracy.\n2.4 NEURAL NETWORK DESIGN SPACE EXPLORATION Neural networks (including deep and convolutional NNs) have a large design space, with numerot. options for microarchitectures, macroarchitectures, solvers, and other hyperparameters. It seem. natural that the community would want to gain intuition about how these factors impact a NN. accuracy (i.e. the shape of the design space). Much of the work on design space exploration (DSE. of NNs has focused on developing automated approaches for finding NN architectures that delive. higher accuracy. These automated DSE approaches include bayesian optimization (Snoek et al. 2012), simulated annealing (Ludermir et al.|2006), randomized search (Bergstra & Bengiof 2012 and genetic algorithms (Stanley & Miikkulainen2002). To their credit, each of these papers prc. vides a case in which the proposed DSE approach produces a NN architecture that achieves highe. accuracy compared to a representative baseline. However, these papers make no attempt to provid intuition about the shape of the NN design space. Later in this paper, we eschew automated ap. proaches - instead, we refactor CNNs in such a way that we can do principled A/B comparisons t. investigate how CNN architectural decisions influence model size and accuracy..\nIn the following sections, we first propose and evaluate the SqueezeNet architecture with and with- out model compression. Then, we explore the impact of design choices in microarchitecture and macroarchitecture for SqueezeNet-like CNN architectures.\nIn this section, we begin by outlining our design strategies for CNN architectures with few param. eters. Then, we introduce the Fire module, our new building block out of which to build CNN architectures. Finally, we use our design strategies to construct SqueezeNet, which is comprised. 
mainly of Fire modules.\nARCHITECTURAL DESIGN STRATEGIES Our overarching objective in this paper is to identify CNN architectures that have few parameters while maintaining competitive accuracy. To achieve this, we employ three main strategies when designing CNN architectures:\nStrategy 1. Replace 3x3 filters with 1x1 filters. Given a budget of a certain number of convolution filters, we will choose to make the majority of these filters 1x1, since a 1x1 filter has 9X fewer parameters than a 3x3 filter.\nStrategy 3. Downsample late in the network so that convolution layers have large activation maps. In a convolutional network, each convolution layer produces an output activation map with. a spatial resolution that is at least 1x1 and often much larger than 1x1. The height and width of. these activation maps are controlled by: (1) the size of the input data (e.g. 256x256 images) and (2\nStrategy 2. Decrease the number of input channels to 3x3 filters. Consider a convolution layer. that is comprised entirely of 3x3 filters. The total quantity of parameters in this layer is (number of . input channels) * (number of filters) * (3*3). So, to maintain a small total number of parameters in a CNN, it is important not only to decrease the number of 3x3 filters (see Strategy 1 above), but. also to decrease the number of input channels to the 3x3 filters. We decrease the number of input. channels to 3x3 filters using squeeze layers, which we describe in the next section..\nsqueeze 1x1 convolution filters. ReLU expand 1x1 and 3x3 convolution filters ReLU\nFigure 1: Microarchitectural view: Organization of convolution filters in the Fire module. In this example, S1x1 = 3, e1x1 = 4, and e3x3 = 4. We illustrate the convolution filters but not the activations.\nthe choice of layers in which to downsample in the CNN architecture. Most commonly, downsam pling is engineered into CNN architectures by setting the (stride > 1) in some of the convolution or. pooling layers (e.g. (Szegedy et al.[2014] Simonyan & Zisserman2014] Krizhevsky et al.[2012) If early3|layers in the network have large strides, then most layers will have small activation maps.. Conversely, if most layers in the network have a stride of 1, and the strides greater than 1 are con-. centrated toward the end'[of the network, then many layers in the network will have large activation. maps. Our intuition is that large activation maps (due to delayed downsampling) can lead to higher. classification accuracy, with all else held equal. Indeed, K. He and H. Sun applied delayed down-. sampling to four different CNN architectures, and in each case delayed downsampling led to higher. classification accuracy (He & Sun2015)."}, {"section_index": "2", "section_name": "3.2 THE FIRE MODULE", "section_text": "We define the Fire module as follows. A Fire module is comprised of: a squeeze convolution layer (which has only 1x1 filters), feeding into an expand layer that has a mix of 1x1 and 3x3 convolution filters; we illustrate this in Figure|1 The liberal use of 1x1 filters in Fire modules is an application of Strategy 1 from Section 3.1 We expose three tunable dimensions (hyperparameters) in a Fire module: S1x1, e1x1, and e3x3. In a Fire module, S1x1 is the number of filters in the squeeze layer (all 1x1), e1x1 is the number of 1x1 filters in the expand layer, and e3x3 is the number of 3x3 filters in the expand layer. 
When we use Fire modules we set s1x1 to be less than (e1x1 + e3x3), so the squeeze layer helps to limit the number of input channels to the 3x3 filters, as per Strategy 2 from Section3.1\nWe now describe the SqueezeNet CNN architecture.We illustrate in Figure 2|that SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv1O). We gradually increase the number of filters per fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10; these relatively late placements of pooling are per Strategy 3 from Section|3.1 We present the full SqueezeNet architecture in Table[1\nIn our terminology, an \"early\" layer is close to the input data. 4In our terminology, the \"end' of the network is the classifier.\nStrategies 1 and 2 are about judiciously decreasing the quantity of parameters in a CNN while attempting to preserve accuracy. Strategy 3 is about maximizing accuracy on a limited budget of parameters. Next, we describe the Fire module, which is our building block for CNN architectures that enables us to successfully employ Strategies 1, 2, and 3."}, {"section_index": "3", "section_name": "3.3.1 OTHER SOUEEZENET DETAILS", "section_text": "We released the SqueezeNet configuration files in the format defined by the Caffe CNN frame- work. However, in addition to Caffe, several other CNN frameworks have emerged, including MXNet (Chen et al.]2015a), Chainer (Tokui et al.]2015), Keras (Chollet2016), and Torch (Col- lobert et al.]2011]. Each of these has its own native format for representing a CNN architec- ture. That said, most of these libraries use the same underlying computational back-ends such as cuDNN (Chetlur et al.] 2014) and MKL-DNN (Das et al.J2016). The research community has\nconv1 conv1 conv1 96 96 96 maxpool/2 maxpool/2 maxpool/2 96 fire2 fire2 fire2 conv1x1 128 128 128 fire3 fire3 fire3 128 128 128 fire4 fire4 fire4 conv1x1 256 256 256 maxpool/2 maxpool/2 maxpool/2 fire5 fire5 fire5 256 256 256 fire6 fire6 fire6 conv1x1 384 384 384 fire7 fire7 fire7 384 384 >384 fire8 fire8 fire8 conv1x1] 512 512 512 maxpool/2 maxpool/2 maxpool/2 fire9 fire9 fire9 512 512 512 conv10 conv10 conv10 1000 1000 1000 global avgpool global avgpool global avgpool \"labrador retriever softmax softmax softmax dog\"\nFigure 2: Macroarchitectural view of our SqueezeNet architecture. Left: SqueezeNet (Section|3.3) Middle: SqueezeNet with simple bypass (Section|6); Right: SqueezeNet with complex bypass (Sec. tion[6).\nFor brevity, we have omitted number of details and design choices about SqueezeNet from Table|1 and Figure[2 We provide these design choices in the following. The intuition behind these choices. nay be found in the papers cited below..\nSo that the output activations from 1x1 and 3x3 filters have the same height and width, we add a 1-pixel border of zero-padding in the input data to 3x3 filters of expand modules. ReLU (Nair & Hinton2010) is applied to activations from squeeze and expand layers. Dropout (Srivastava et al.[2014) with a ratio of 50% is applied after the fire9 module. Note the lack of fully-connected layers in SqueezeNet; this design choice was inspired by the NiN (Lin et al.|2013) architecture. When training SqueezeNet, we begin with a learning rate of O.04, and we lin- early decrease the learning rate throughout training, as described in (Mishkin et al. 2016). For details on the training protocol (e.g. 
batch size, learning rate, parame- ter initialization), please refer to our Caffe-compatible configuration files located here: https://github.com/DeepScale/SqueezeNet The Caffe framework does not natively support a convolution layer that contains multiple filter resolutions (e.g. 1x1 and 3x3) (Jia et al.]2014). To get around this, we implement our expand layer with two separate convolution layers: a layer with 1x1 filters, and a layer with 3x3 filters. Then, we concatenate the outputs of these layers together in the channel dimension. This is numerically equivalent to implementing one layer that contains both 1x1 and 3x3 filters.\nMXNet (Chen et al.f[2015a) port of SqueezeNet: (Haria]2016 Chainer (Tokui et al. 2015) port of SqueezeNet: (Bell2016) Keras (Chollet)2016) port of SqueezeNet: (DT42][2016) Torch (Collobert et al. 2011) port of SqueezeNet's Fire Modules: (Waghn\nWe now turn our attention to evaluating SqueezeNet. In each of the CNN model compression paper. reviewed in Section 2.1] the goal was to compress an AlexNet (Krizhevsky et al.[ 2012) mode. that was trained to classify images using the ImageNet (Deng et al.[2009) (ILSVRC 2012) datase Therefore, we use AlexNeland the associated model compression results as a basis for compariso. when evaluating SqueezeNet.\nTable 1: SqueezeNet architectural dimensions. (The formatting of this table was inspired by the Inception2 paper (Ioffe & Szegedy|2015).)\nfilter size / S1x1 e1x1 e3x3 #parameter #parameter layer stride $1x1 e1x1 e3x3 output size depth # bits before after name/type (if not a fire (#1x1 (#1x1 (#3x3 sparsity sparsity sparsity pruning squeeze) expand) expand) pruning layer) input image 224x224x3 - conv1 111x111x96 7x7/2 (x96) 1 100%(7x7) 6bit 14,208 14,208 maxpool1 55x55x96 3x3/2 0 fire2 55x55x128 2 16 64 64 100% 100% 33% 6bit 11,920 5,746 fire3 55x55x128 2 16 64 64 100% 100% 33% 6bit 12,432 6,258 fire4 55x55x256 2 32 128 128 100% 100% 33% 6bit 45,344 20,646 maxpool4 27x27x256 3x3/2 0 fire5 27x27x256 2 32 128 128 100% 100% 33% 6bit 49,440 24,742 fire6 27x27x384 2 48 192 192 100% 50% 33% 6bit 104,880 44,700 fire7 27x27x384 2 48 192 192 50% 100% 33% 6bit 111,024 46,236 fire8 27x27x512 2 64 256 256 100% 50% 33% 6bit 188,992 77,581 maxpool8 13x12x512 3x3/2 0 fire9 13x13x512 2 64 256 256 50% 100% 30% 6bit 197,184 77,581 conv10 13x13x1000 1x1/1 (x1000) 1 20% (3x3) 6bit 513,000 103,400 avgpool10 1x1x1000 13x13/1 0 1,248,424 421,098 activations parameters compression info (total) (total)\nIn Table[2] we review SqueezeNet in the context of recent model compression results. The SVD- based approach is able to compress a pretrained AlexNet model by a factor of 5x, while diminishing top-1 accuracy to 56.0% (Denton et al.|2014). Network Pruning achieves a 9x reduction in model size while maintaining the baseline of 57.2% top-1 and 80.3% top-5 accuracy on ImageNet (Han et al.]2015b). Deep Compression achieves a 35x reduction in model size while still maintaining the baseline accuracy level (Han et al.2015a). Now, with SqueezeNet, we achieve a 50X reduction in model size compared to AlexNet, while meeting or exceeding the top-1 and top-5 accuracy of AlexNet. We summarize all of the aforementioned results in Table[2\nIt appears that we have surpassed the state-of-the-art results from the model compression commu nity: even when using uncompressed 32-bit values to represent the model, SqueezeNet has a 1.4 smaller model size than the best efforts from the model compression community while maintain- ing or exceeding the baseline accuracy. 
Until now, an open question has been: are small models amenable to compression, or do small models \"need\" all of the representational power afforded by dense floating-point values? To find out, we applied Deep Compression (Han et al.]2015a).\n5Our baseline is bv1c_a1exnet from the Caffe codebase (Jia et al.l|2014\nIn addition, these results demonstrate that Deep Compression (Han et al.]2015a) not only works. well on CNN architectures with many parameters (e.g. AlexNet and VGG), but it is also able tc. compress the already compact, fully convolutional SqueezeNet architecture. Deep Compressior compressed SqueezeNet by 10 while preserving the baseline accuracy. In summary: by combin. ing CNN architectural innovation (SqueezeNet) with state-of-the-art compression techniques (Deep. Compression), we achieved a 510 reduction in model size with no decrease in accuracy compared. to the baseline.\nFinally, note that Deep Compression (Han et al.[2015b) uses a codebook as part of its scheme for quantizing CNN parameters to 6- or 8-bits of precision. Therefore, on most commodity processors, it is not trivial to achieve a speedup of 32 = 4x with 8-bit quantization or 32 = 5.3x with 6-bit quantization using the scheme developed in Deep Compression. However, Han et al. developed custom hardware - Efficient Inference Engine (EIE) - that can compute codebook-quantized CNNs more efficiently (Han et al.]2016a). In addition, in the months since we released SqueezeNet, P. Gysel developed a strategy called Ristretto for linearly quantizing SqueezeNet to 8 bits (Gysel 2016). Specifically, Ristretto does computation in 8 bits, and it stores parameters and activations in 8-bit data types. Using the Ristretto strategy for 8-bit computation in SqueezeNet inference, Gysel observed less than 1 percentage-point of drop in accuracy when using 8-bit instead of 32-bit data types."}, {"section_index": "4", "section_name": "CNN MICROARCHITECTURE DESIGN SPACE EXPLORATION", "section_text": "So far, we have proposed architectural design strategies for small models, followed these principles to create SqueezeNet, and discovered that SqueezeNet is 50x smaller than AlexNet with equivalent accuracy. However, SqueezeNet and other models reside in a broad and largely unexplored design space of CNN architectures. Now, in Sections 5[and [6] we explore several aspects of the design space. We divide this architectural exploration into two main topics: microarchitectural exploration (per-module layer dimensions and configurations) and macroarchitectural exploration (high-level end-to-end organization of modules and other layers).\nIn this section, we design and execute experiments with the goal of providing intuition about the shape of the microarchitectural design space with respect to the design strategies that we proposec. in Section|3.1 Note that our goal here is not to maximize accuracy in every experiment, but rathe. to understand the impact of CNN architectural choices on model size and accuracy..\n6Note that, due to the storage overhead of storing sparse matrix indices, 33% sparsity leads to somewhat less than a 3 decrease in model size\nTable 2: Comparing SqueezeNet to model compression approaches. By model size, we mean the number of bytes required to store all of the parameters in the trained model..\nCNN architecture Compression Approach Data Original -- Reduction in. Top-1 Top-5 Type Compressed Model Model Size ImageNet ImageNet Size vs. 
AlexNet Accuracy Accuracy AlexNet None (baseline) 32 bit 240MB 1x 57.2% 80.3% AlexNet SVD [Denton et al. 32 bit 240MB -> 48MB 5x 56.0% 79.4% 2014 AlexNet Network Pruning Han. 32 bit 240MB ->27MB 9x 57.2% 80.3% et al.2015b AlexNet Deep 5-8 bit 240MB -> 6.9MB 35x 57.2% 80.3% Compression Han et al.2015a SqueezeNet (ours) None 32 bit 4.8MB 50x 57.5% 80.3% SqueezeNet (ours) Deep Compression 8 bit 4.8MB -> 0.66MB 363x 57.5% 80.3% SqueezeNet (ours) Deep Compression 6 bit 4.8MB > 0.47MB 510x 57.5% 80.3%\nto SqueezeNet, using 33% sparsity|and 8-bit quantization. This yields a 0.66 MB model (363 smaller than 32-bit AlexNet) with equivalent accuracy to AlexNet. Further, applying Deep Compres. sion with 6-bit quantization and 33% sparsity on SqueezeNet, we produce a 0.47MB model (510 smaller than 32-bit AlexNet) with equivalent accuracy. Our small model is indeed amenable to compression.\nSqueeze Ratio (SR) Percentage of 3x3 filters (pct3x3) 0.1250.25 0.5 0.75 1.0 1.0 12.5 25.037.5 50.0 62.5 75.0 87.5 99.0 100 100 (%) Aceancee SqueezeNet 85.3% 86.0% 85.3% 85.3% 80.3% accuracy accuracy 76.3% accuracy accuracy accuracy accuracy 80 80 13 MB of 19 MB of 0 13 MB of 21 MBof 4.8 MB of weights weights 5.7 MB of weights weights weights 60 60 weights : S-do 40 40 20 20 : * : 0 :. - :. : 0 4.87.6 13 19 24 5.7 7.4 9.3 1113 15 17 19 21 MB of weights in model. MB of weights in model. (a) Exploring the impact of the squeeze ratio (SR) (b) Exploring the impact of the ratio of 3x3 filters i on model size and accuracy.. expand layers (pct3x3) on model size and accuracy.\n5.1 CNN MICROARCHITECTURE METAPARAMETERS In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Sec- tion 3.2] s1x1, e1x1, and e3x3. SqueezeNet has 8 Fire modules with a total of 24 dimensional hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we define the following set of higher level metaparameters which control the dimensions of all Fire modules in a CNN. We define base, as the number of expand filters in the first Fire module in a CNN. After every freg Fire modules, we increase the number of expand filters by incre. In other\nIn Section[3.1] we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratic between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy..\nNote that, for a given model, all Fire layers share the same squeeze ratio.. *Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers ir SqueezeNet have 0.125x the number of filters as the expand layers..\n5.1 CNN MICROARCHITECTURE METAPARAMETERS In SqueezeNet, each Fire module has three dimensional hyperparameters that we defined in Sec-. tion 3.2 s1x1, e1x1, and e3x3. SqueezeNet has 8 Fire modules with a total of 24 dimensional. hyperparameters. To do broad sweeps of the design space of SqueezeNet-like architectures, we. define the following set of higher level metaparameters which control the dimensions of all Fire. modules in a CNN. We define basee as the number of expand filters in the first Fire module in a CNN. After every freq Fire modules, we increase the number of expand filters by incre. In other. words, for Fire module i, the number of expand filters is e; = basee + (incre *. 
In Section 3.1, we proposed decreasing the number of parameters by using squeeze layers to decrease the number of input channels seen by 3x3 filters. We defined the squeeze ratio (SR) as the ratio between the number of filters in squeeze layers and the number of filters in expand layers. We now design an experiment to investigate the effect of the squeeze ratio on model size and accuracy.

In these experiments, we use SqueezeNet (Figure 2) as a starting point. As in SqueezeNet, these experiments use the following metaparameters: base_e = 128, incr_e = 128, pct3x3 = 0.5, and freq = 2. We train multiple models, where each model has a different squeeze ratio (SR)7 in the range [0.125, 1.0]. In Figure 3(a), we show the results of this experiment, where each point on the graph is an independent model that was trained from scratch. SqueezeNet is the SR = 0.125 point in this figure.8 From this figure, we learn that increasing SR beyond 0.125 can further increase ImageNet top-5 accuracy from 80.3% (i.e. AlexNet-level) with a 4.8MB model to 86.0% with a 19MB model. Accuracy plateaus at 86.0% with SR = 0.75 (a 19MB model), and setting SR = 1.0 further increases model size without improving accuracy.

7 Note that, for a given model, all Fire layers share the same squeeze ratio.
8 Note that we named it SqueezeNet because it has a low squeeze ratio (SR). That is, the squeeze layers in SqueezeNet have 0.125x the number of filters as the expand layers.

VGG (Simonyan & Zisserman, 2014) architectures have 3x3 spatial resolution in most layers' filters; GoogLeNet (Szegedy et al., 2014) and Network-in-Network (NiN) (Lin et al., 2013) have 1x1 filters in some layers. In GoogLeNet and NiN, the authors simply propose a specific quantity of 1x1 and 3x3 filters without further analysis.9 Here, we attempt to shed light on how the proportion of 1x1 and 3x3 filters affects model size and accuracy.

9 To be clear, each filter is 1x1xChannels or 3x3xChannels, which we abbreviate to 1x1 and 3x3.

We use the following metaparameters in this experiment: base_e = incr_e = 128, freq = 2, SR = 0.500, and we vary pct3x3 from 1% to 99%. In other words, each Fire module's expand layer has a predefined number of filters partitioned between 1x1 and 3x3, and here we turn the knob on these filters from \"mostly 1x1\" to \"mostly 3x3\". As in the previous experiment, these models have 8 Fire modules, following the same organization of layers as in Figure 2. We show the results of this experiment in Figure 3(b). Note that the 13MB models in Figure 3(a) and Figure 3(b) are the same architecture: SR = 0.500 and pct3x3 = 50%. We see in Figure 3(b) that the top-5 accuracy plateaus at 85.6% using 50% 3x3 filters, and further increasing the percentage of 3x3 filters leads to a larger model size but provides no improvement in accuracy on ImageNet."}, {"section_index": "5", "section_name": "6 CNN MACROARCHITECTURE DESIGN SPACE EXPLORATION", "section_text": "So far we have explored the design space at the microarchitecture level, i.e. the contents of individual modules of the CNN. Now, we explore design decisions at the macroarchitecture level concerning the high-level connections among Fire modules. Inspired by ResNet (He et al., 2015b), we explored three different architectures:

- Vanilla SqueezeNet (as per the prior sections).
- SqueezeNet with simple bypass connections between some Fire modules. (Inspired by (Srivastava et al., 2015; He et al., 2015b).)
- SqueezeNet with complex bypass connections between the remaining Fire modules.

We illustrate these three variants of SqueezeNet in Figure 2.

Our simple bypass architecture adds bypass connections around Fire modules 3, 5, 7, and 9, requiring these modules to learn a residual function between input and output. As in ResNet, to implement a bypass connection around Fire3, we set the input to Fire4 equal to (output of Fire2 + output of Fire3), where the + operator is elementwise addition. This changes the regularization applied to the parameters of these Fire modules, and, as per ResNet, can improve the final accuracy and/or ability to train the full model.
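The sketch below (our illustration; `fire` and `conv1x1` are stand-ins, not SqueezeNet code) makes the two bypass variants concrete: the simple bypass just described, and the complex bypass defined in the next paragraph, which inserts a 1x1 convolution when the channel counts differ.

```python
import numpy as np

# A schematic sketch of the two bypass types around a Fire module.
def simple_bypass(x, fire):
    # "just a wire": elementwise addition, valid only when the Fire module
    # preserves the number of channels -- adds no parameters.
    return fire(x) + x

def complex_bypass(x, fire, conv1x1):
    # a 1x1 convolution matches the channel count, at the cost of extra parameters
    return fire(x) + conv1x1(x)

# Toy check with channel-preserving "modules" on a (channels, height, width) map:
x = np.random.randn(128, 28, 28)
fire = lambda t: np.maximum(t, 0.0)   # stand-in for a Fire module
conv1x1 = lambda t: 0.5 * t           # stand-in for a learned 1x1 convolution
assert simple_bypass(x, fire).shape == x.shape
assert complex_bypass(x, fire, conv1x1).shape == x.shape
```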
One limitation is that, in the straightforward case, the number of input channels and number of output channels has to be the same; as a result, only half of the Fire modules can have simple bypass connections, as shown in the middle diagram of Figure 2. When the \"same number of channels\" requirement can't be met, we use a complex bypass connection, as illustrated on the right of Figure 2. While a simple bypass is \"just a wire,\" we define a complex bypass as a bypass that includes a 1x1 convolution layer with the number of filters set equal to the number of output channels that are needed. Note that complex bypass connections add extra parameters to the model, while simple bypass connections do not.

In addition to changing the regularization, it is intuitive to us that adding bypass connections would help to alleviate the representational bottleneck introduced by squeeze layers. In SqueezeNet, the squeeze ratio (SR) is 0.125, meaning that every squeeze layer has 8x fewer output channels than the accompanying expand layer. Due to this severe dimensionality reduction, a limited amount of information can pass through squeeze layers. However, by adding bypass connections to SqueezeNet, we open up avenues for information to flow around the squeeze layers.

We trained SqueezeNet with the three macroarchitectures in Figure 2 and compared the accuracy and model size in Table 3. We fixed the microarchitecture to match SqueezeNet as described in Table 1 throughout the macroarchitecture exploration. Complex and simple bypass connections both yielded an accuracy improvement over the vanilla SqueezeNet architecture. Interestingly, the simple bypass enabled a higher accuracy improvement than complex bypass. Adding the simple bypass connections yielded an increase of 2.9 percentage-points in top-1 accuracy and 2.2 percentage-points in top-5 accuracy without increasing model size.

Table 3: SqueezeNet accuracy and model size using different macroarchitecture configurations."}, {"section_index": "6", "section_name": "7 CONCLUSIONS", "section_text": "In this paper, we have proposed steps toward a more disciplined approach to the design-space exploration of convolutional neural networks. Toward this goal we have presented SqueezeNet, a CNN architecture that has 50x fewer parameters than AlexNet and maintains AlexNet-level accuracy on ImageNet. We also compressed SqueezeNet to less than 0.5MB, or 510x smaller than AlexNet without compression. Since we released this paper as a technical report in 2016, Song Han and his collaborators have experimented further with SqueezeNet and model compression.
Using a new approach called Dense-Sparse-Dense (DSD) (Han et al., 2016b), Han et al. use model compression during training as a regularizer to further improve accuracy, producing a compressed set of SqueezeNet parameters that is 1.2 percentage-points more accurate on ImageNet-1k, and also producing an uncompressed set of SqueezeNet parameters that is 4.3 percentage-points more accurate, compared to our results in Table 2.

We mentioned near the beginning of this paper that small models are more amenable to on-chip implementations on FPGAs. Since we released the SqueezeNet model, Gschwend has developed a variant of SqueezeNet and implemented it on an FPGA (Gschwend, 2016). As we anticipated, Gschwend was able to store the parameters of a SqueezeNet-like model entirely within the FPGA and eliminate the need for off-chip memory accesses to load model parameters.

In the context of this paper, we focused on ImageNet as a target dataset. However, it has become common practice to apply ImageNet-trained CNN representations to a variety of applications such as fine-grained object recognition (Zhang et al., 2013; Donahue et al., 2013), logo identification in images (Iandola et al., 2015), and generating sentences about images (Fang et al., 2015). ImageNet-trained CNNs have also been applied to a number of applications pertaining to autonomous driving, including pedestrian and vehicle detection in images (Iandola et al., 2014; Girshick et al., 2015; Ashraf et al., 2016) and videos (Chen et al., 2015b), as well as segmenting the shape of the road (Badrinarayanan et al., 2015). We think SqueezeNet will be a good candidate CNN architecture for a variety of applications, especially those in which small model size is of importance.

SqueezeNet is one of several new CNNs that we have discovered while broadly exploring the design space of CNN architectures. We hope that SqueezeNet will inspire the reader to consider and explore the broad range of possibilities in the design space of CNN architectures and to perform that exploration in a more systematic manner."}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Khalid Ashraf, Bichen Wu, Forrest N. Iandola, Matthew W. Moskewicz, and Kurt Keutzer. Shallow networks for high-accuracy road object-detection. arXiv:1606.01561, 2016.

Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. arXiv:1511.00561, 2015.

Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew G. Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals for accurate object class detection. In NIPS, 2015b.

Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In NIPS BigLearn Workshop, 2011.

Dipankar Das, Sasikanth Avancha, Dheevatsa Mudigere, Karthikeyan Vaidyanathan, Srinivas Sridharan, Dhiraj D. Kalamkar, Bharat Kaul, and Pradeep Dubey. Distributed deep learning using synchronous stochastic gradient descent. arXiv:1602.06709, 2016.

Ross B. Girshick, Forrest N. Iandola, Trevor Darrell, and Jitendra Malik. Deformable part models are convolutional neural networks. In CVPR, 2015.

David Gschwend. ZynqNet: An FPGA-accelerated embedded convolutional neural network. Master's thesis, Swiss Federal Institute of Technology Zurich (ETH-Zurich), 2016.

Philipp Gysel. Ristretto: Hardware-oriented approximation of convolutional neural networks. arXiv:1605.06402, 2016.

S. Han, H. Mao, and W. Dally. Deep compression: Compressing DNNs with pruning, trained quantization and huffman coding. arXiv:1510.00149v3, 2015a.
S. Han, J. Pool, J. Tran, and W. Dally. Learning both weights and connections for efficient neural networks. In NIPS, 2015b.

Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William Dally. EIE: Efficient inference engine on compressed deep neural network. International Symposium on Computer Architecture (ISCA), 2016a.

Song Han, Jeff Pool, Sharan Narang, Huizi Mao, Shijian Tang, Erich Elsen, Bryan Catanzaro, John Tran, and William J. Dally. DSD: Dense-sparse-dense training for deep neural networks. arXiv:1607.04381, 2016b.

Kaiming He and Jian Sun. Convolutional neural networks at constrained time cost. In CVPR, 2015.

Forrest N. Iandola, Anting Shen, Peter Gao, and Kurt Keutzer. DeepLogo: Hitting logo recognition with the deep neural network hammer. arXiv:1510.02131, 2015.

Forrest N. Iandola, Khalid Ashraf, Matthew W. Moskewicz, and Kurt Keutzer. FireCaffe: near-linear acceleration of deep neural network training on compute clusters. In CVPR, 2016.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. JMLR, 2015.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.

Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv:1312.4400, 2013.

T.B. Ludermir, A. Yamazaki, and C. Zanchettin. An optimization methodology for neural network weights and architectures. IEEE Trans. Neural Networks, 2006.

Dmytro Mishkin, Nikolay Sergievskiy, and Jiri Matas. Systematic evaluation of CNN advances on the ImageNet. arXiv:1606.02228, 2016.

Jiantao Qiu, Jie Wang, Song Yao, Kaiyuan Guo, Boxun Li, Erjin Zhou, Jincheng Yu, Tianqi Tang, Ningyi Xu, Sen Song, Yu Wang, and Huazhong Yang. Going deeper with embedded FPGA platform for convolutional neural network. In ACM International Symposium on FPGA, 2016.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014.

Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv:1512.00567, 2015.

Ning Zhang, Ryan Farrell, Forrest Iandola, and Trevor Darrell. Deformable part descriptors for fine-grained recognition and attribute prediction. In ICCV, 2013."}]
By5e2L9gl | [{"section_index": "0", "section_name": "TRUSTING SVM FOR PIECEWISE LINEAR CNNs", "section_text": "Leonard Berradal, Andrew Zisserman' and M. Pawan Kumar1,2\nWe present a novel layerwise optimization algorithm for the learning objective of Piecewise-Linear Convolutional Neural Networks (PL-CNNs), a large class of convolutional neural networks. Specifically, PL-CNNs employ piecewise. linear non-linearities such as the commonly used ReLU and max-pool, and. an SVM classifier as the final layer. The key observation of our approach. is that the problem corresponding to the parameter estimation of a layer. can be formulated as a difference-of-convex (DC) program, which happens. to be a latent structured SVM. We optimize the DC program using the concave-convex procedure, which requires us to iteratively solve a structured SVM problem. This allows to design an optimization algorithm with an optimal learning rate that does not require any tuning. Using the MNIST. CIFAR and ImageNet data sets, we show that our approach always improves. over the state of the art variants of backpropagation and scales to large data. and large network settings"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Ihe backpropagation algorithm is commonly employed to estimate the parameters of s. convolutional neural network (CNN) using a supervised training data set (Rumelhart et al. 1986). Part of the appeal of backpropagation comes from the fact that it is applicable tc. a wide variety of networks, namely those that have (sub-)differentiable non-linearities an employ a (sub-)differentiable learning objective. However, the generality of backpropagatior. comes at the cost of a high sensitivity to its hyperparameters such as the learning rat. and momentum. Standard line-search algorithms cannot be used on the primal objectiv. function in this setting, as (i) there may not exist a step-size guaranteeing a monotoni. decrease because of the use of sub-gradients, and (ii) even in the smooth case, each functioi. evaluation requires a forward pass over the entire data set without any update, making th. approach computationally unfeasible. Choosing the learning rate thus remains an open issue. with the state-of-the-art algorithms suggesting adaptive learning rates (Duchi et al.2011. Zeiler2012] Kingma & Ba,2015). In addition, techniques such as batch normalization 7Ioffe. & Szegedy2015) and dropout (Srivastava et al.2014) have been introduced to respectively reduce the sensitivity to the learning rate and to prevent from overfitting..\nWith this work, we open a different line of inquiry, namely, is it possible to design more robus optimization algorithms for special but useful classes of CNNs? To this end, we focus on th. networks that are commonly used in computer vision. Specifically, we consider CNNs witl. convolutional and dense layers that apply a set of piecewise linear (PL) non-linear operations. to obtain a discriminative representation of an input image. While this assumption ma sound restrictive at first, we show that commonly used non-linear operations such as ReLl. and max-pool fall under the category of PL functions. The representation obtained in thi. way is used to classify the image via a multi-class SVM, which forms the final layer of th. network. 
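To make this pipeline concrete, here is a toy sketch (ours, not the paper's code), with a single dense layer standing in for the convolutional and dense stack: the representation is built from piecewise linear operations only, and the final multi-class SVM scores each class by a dot product.

```python
import numpy as np

# A minimal sketch of the PL-CNN prediction pipeline; all names are ours.
def relu(v):
    return np.maximum(v, 0.0)             # piecewise linear: max{v, 0}

def max_pool(u, k=2):
    return u.reshape(-1, k).max(axis=1)   # piecewise linear: max over each group

def pl_cnn_predict(x, W_hidden, W_svm):
    z = max_pool(relu(W_hidden @ x))      # PL representation of the input
    scores = W_svm @ z                    # one score per class (dot products)
    return np.argmax(scores)              # the prediction maximizes the score

rng = np.random.default_rng(0)
x = rng.normal(size=16)                   # toy "image"
W_hidden = rng.normal(size=(8, 16))       # stand-in for the conv/dense layers
W_svm = rng.normal(size=(10, 4))          # 10-class SVM on the 4-dim representation
print(pl_cnn_predict(x, W_hidden, W_svm))
```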
We refer to this class of networks as PL-CNN..\nWe design a novel, principled algorithm to optimize the learning objective of a PL-CNN Our algorithm is a layerwise method, that is, it iteratively updates the parameters of one layer while keeping the other layers fixed. For this work, we use a simple schedule over the"}, {"section_index": "2", "section_name": "A BSTRACT", "section_text": "layers, namely, repeated passes from the output layer to the input one. However, it may. be possible to further improve the accuracy and efficiency of our algorithm by designing. more sophisticated scheduling strategies. The key observation of our approach is that the. parameter estimation of one layer of PL-CNN can be formulated as a difference-of-convex DC) program that can be viewed as a latent structured SVM problem (Yu & Joachims,. 2009) This allows us to solve the DC program using the concave-convex procedure (CCCP). (Yuille & Rangarajan 2002). Each iteration of CCCP requires us to solve a convex structured. SVM problem. To this end, we use the powerful block-coordinate Frank-Wolfe (BCFW. algorithm (Lacoste-Julien et al.|2013), which solves the dual of the convex program iteratively. by computing the conditional gradients corresponding to a subset of training samples. In order to further improve BCFW for PL-CNNs, we extend it in three important ways. First we introduce a trust-region term that allows us to initialize the BCFW algorithm using. the current estimate of the layer parameters. Second, we reduce the memory requirement of BCFW by an order of magnitude, via an efficient representation of the feature vectors. corresponding to the dense layers. Third, we show that, empirically, the number of constraints of the structural SVM problem can be reduced substantially without any loss in accuracy which allows us to significantly reduce its time complexity..\nCompared to backpropagation (Rumelhart et al.] 1986) or its variants (Duchi et al. 2011 Zeiler 2012 Kingma & Ba2015), our algorithm offers three advantages. First, the CCCI algorithm provides a monotonic decrease in the learning objective at each layer. Since layerwise optimization itself can be viewed as a block-coordinate method, our algorithn guarantees a monotonic decrease of the overall objective function after each layer's parameter have been updated. Second, since the dual of the SVM problem is a smooth convex quadrati program, each step of the BCFW algorithm (in the inner iteration of the CCCP) provides a monotonic increase in its dual objective. Third, since the only step-size required ii our approach comes while solving the SVM dual, we can use the optimal step-size that i computed analytically during each iteration of BCFw (Lacoste-Julien et al.] 2013). In othe words, our algorithm has no learning rate, initial or not, that requires tuning.\nUsing standard network architectures and publicly available data sets, we show that our algorithm provides a boost over the state of the art variants of backpropagation for learning PL-CNNs and we demonstrate scalability of the method..\nWhile some of the early successful approaches for the optimization of deep neural network. relied on greedy layer-wise training (Hinton et al.]2006) Bengio et al.. 2007), most currentl used methods are variants of backpropagation (Rumelhart et al.f|1986) with adaptive learning rates, as discussed in the introduction..\nAt every iteration, backpropagation performs a forward pass and a backward pass on th. 
network, and updates the parameters of each layer by stochastic or mini-batch gradien descent. This makes the choice of the learning rate critical for efficient optimization. Duch et al.(2011) have proposed the Adagrad convex solver, which adapts the learning rat. for every direction and takes into account past updates. Adagrad changes the learnin. rate to favor steps in gradient directions that have not been observed frequently in pas updates. When applied to the non-convex CNN optimization problem, Adagrad may converg. prematurely due to a rapid decrease in the learning rate (Goodfellow et al.]2016). In order t. prevent this behavior, the Adadelta algorithm (Zeiler 2012) makes the decay of the learning. rate slower. It is worth noting that this fix is empirical, and to the best of our knowledge. provides no theoretical guarantees. Kingma & Ba(2015) propose a different scheme for th. learning rate, called Adam, which uses an online estimation of the first and second moment of the gradients to provide centered and normalized updates. However all these method still require the tuning of the initial learning rate to perform well..\nSecond-order and natural gradient optimization methods have also been a subject of attention. The focus in this line of work has been to come up with appropriate approximations to make the updates cheaper. Martens & Sutskever(2012) suggested a Hessian-free second order optimization using finite differences to approximate the Hessian and conjugate gradient to.\ncompute the update. Martens & Grosse(2015) derive an approximation of the Fisher matrix. inverse, which provides a more efficient method for natural gradient descent. Ollivier(2013. explore a set of Riemannian methods based on natural gradient descent and quasi-Newtor methods to guarantee reparametrization invariance of the problem.Desjardins et al.(2015. demonstrate a scaled up natural gradient descent method by training on the ImageNe. data set (Russakovsky et al.,2015). Though providing more informative updates and solic. theoretical support than SGD-based approaches, these methods do not take into account the. structure of the problem offered by the commonly used non-linear operations..\nOur work is also related to some of the recent developments in optimization for deep learning. For example, Taylor et al.(2016) use ADMM for massive distribution of computation in a. layer-wise fashion, and in particular their method will yield closed-form updates for any PI. CNN.Lee et al.(2015) propose to use targets instead of gradients to propagate informatior. through the network, which could help to extend our algorithm. Zhang et al.(2016) derive a. convex relaxation for the learning objective for a restricted class of CNNs, which also relie on solving an approximate convex problem. In (Amos et al.]2016), the authors identif:. 
convex problems for the inference task, when the neural network is a convex function o some of its inputs.\nMore generally, we believe that our hitherto unknown observation regarding the relationship between PL-CNNs and latent SVMs can (i) allow the progress made in one field to be transferred to the other and (ii) help design a new generation of principled algorithms for deep learning optimization."}, {"section_index": "3", "section_name": "PIECEWISE LINEAR CONVOLUTIONAL NEURAL NETWORKS", "section_text": "A piecewise linear convolutional neural network (PL-CNN) consists of a series of convolutiona layers, followed by a series of dense layers, which provides a concise representation of ar input image. Each layer of the network performs two operations: a linear transformatior (that is, a convolution or a matrix multiplication), followed by a piecewise linear non-linea operation such as ReLU or max-pool. The resulting representation of the image is used fo classification via an SVM. In the remainder of this section, we provide a formal descriptior of PL-CNN.\nPiecewise Linear Functions. A piecewise linear (PL) function f(u) is a function of the following form (Melzer1986) :\nf(u) =max{au}-max{bu} iE[m] jE[n]\nwhere [m] = {1,... , m}, and [n] = {1, ... ,n}. Each of the two maxima above is a convex function, therefore such a function f is not generally convex, but it is rather a difference of two convex functions. Importantly, many commonly used non-linear operations such as ReLU or max-pool are PL functions of their input. For example, ReLU corresponds to the function R(v) = max{v, 0} where v is a scalar. Similarly, max-pool for a D-dimensional. vector u corresponds to M(u) = maxie[D]{e, u}, where e; is a vector whose i-th element is. 1 and all other elements are 0. Given a value of u, we say that (i*,j*) is the activation of. the PL function at u if i* = argmaxie[m]{a, u} and j* = argmax,e[nj{bJ u}.\nWith a more theoretical approach, Goel et al.(2016) propose an algorithm to learn shallow ReLU nets with guarantees of time convergence and generalization error. Heinemann et al. [2016) show that a subclass of neural networks can be modeled as an improper kernel, which then reduces the learning problem to a simple SVM with the constructed kernel.\nPL-CNN Parameters. We denote the parameters of an L layer PL-CNN by W = {W';l e [Ll}. In other words, the parameters of the l-th layer is defined as Wl. The CNN. defines a composite function, that is, the output z'-1 of layer l - 1 is the input to the layer l. Given the input z'-1 to layer l, the output is computed as z' = o'(wl . z-1), where \", is. either a convolution or a matrix multiplication, and o' is a PL non-linear function, such as ReLU or max-pool. The input to the first layer is an image x, that is, zo = x. We denote.\nPrediction. Given an image x, a PL-CNN predicts its class using the following rule\nIn other words, the dot product of the D-dimensional representation of x with the SVM parameter for a class y provides the score for the class. The desired prediction is obtained. by maximizing the score over all possible classes.\nN min max (yi,Yi) +(Wsvm - Wsvm D(x W,Wsvm yi Yi EV lE[L]U{svm}\nIn order to enable layerwise optimization of PL-CNNs, we show that parameter estimatioi of a layer can be formulated as a difference-of-convex (DC) program (subsection 4.1). Thi allows us to use the concave-convex procedure, which solves a series of convex optimizatior problems (subsection|4.2). 
We show that each convex problem closely resembles a structurec SVM objective, which can be addressed by the powerful block-coordinate Frank-Wolf (BCFW) algorithm. We extend BCFW to improve its initialization, time complexity anc memory requirements, thereby enabling its use in learning PL-CNNs (subsection4.3). Fo the sake of clarity, we only provide sketches of the proofs for those propositions that ar necessary for understanding the paper. The detailed proofs of the remaining propositions are provided in the Appendix.."}, {"section_index": "4", "section_name": "4.1 LAYERWISE OPTIMIZATION AS A DC PROGRAM", "section_text": "Given the values of the parameters for the convolutional and the dense layers (that is, W) the learning objective (3) is the standard SVM problem in parameters Wsvm. In other words. it is a convex optimization problem with several efficient solvers (Tsochantaridis et al.]2004 Joachims et al.]2009] Shalev-Shwartz et al.2009), including the BCFW algorithm (Lacoste- Julien et al. 2013). Hence, the optimization of the final layer is a computationally easy problem. In contrast, the optimization of the parameters of a convolutional or a dense layer l does not result in a convex program. In general, this problem can be arbitrarily hard to solve. However, in the case of PL-CNN, we show that the problem can be formulated as a specific type of DC program, which enables efficient optimization via the iterative use of BCFW. The key property that enables our approach is the following proposition that shows that the composition of PL functions is also a PL function.\nProposition 1. Consider PL functions g : Rm -> R and gi : Rn > R, for all i E [m] Define a function f : Rn > R as f(u) = g([g1(u),g2(u),... ,gm(u)]'). Then f is also a PL function (proof in Appendix|A).\nthe input to the final layer by zL = (x; W) E RD. In other words, given an image x, the convolutional and dense layers of a PL-CNN provide a D-dimensional representation of x to the final classification layer. The final layer of a PL-CNN is a C class SVM wsvm, which specifies one parameter Wsvm E RD for each class y E V.\n= argmax Wsvm(x; W) yEV\nLearning Objective. Given a training data set D = {(x, yi),i E [N]}, where x, is the input image and yi is its ground-truth class, we wish to estimate the parameters W U wsvm of the PL-CNN. To this end, we minimize a regularized upper bound on the empirical risk The risk of a prediction y* given the ground-truth yi is measured with a user-specified loss function (y+, yi). For example, the standard 0 1 loss has a value of 0 for a correct prediction and 1 for an incorrect prediction. Formally, the parameters of a PL-CNN are estimated using the following learning objective:\nThe hyperparameter A denotes the relative weight of the regularization compared to the upper. bound of the empirical risk. Note that, due to the presence of piecewise linear non-linearities. the representation (; W) (and hence, the above objective) is highly non-convex in the PL-CNN parameters.\nUsing the above proposition, we can reformulate the problem of optimizing the parameters of one layer of the network as a DC program. Specifically, the following proposition shows that the problem can be formulated as a latent structured SVM objective (Yu & Joachims!2009)\nProposition 2. The learning objective of a PL-CNN with respect to the parameters of the l-th layer can be specified as follows..\nSketch of the Proof. 
For a given image x with the ground-truth class y, consider the input to the layer l, which we denote by z^{l-1}. Since all the layers except the l-th one are fixed, the input z^{l-1} is a constant vector, which only depends on the image x (that is, its value does not depend on the variables W^l). In other words, we can write z^{l-1} = Φ^{l-1}(x).

Given the input z^{l-1}, all the elements of the output of the l-th layer, denoted by z^l, are a PL function of W^l since the layer performs a linear transformation of z^{l-1} according to the parameters W^l, followed by an application of PL operations such as ReLU or max-pool. The vector z^l is then fed to the (l + 1)-th layer. The output z^{l+1} of the (l + 1)-th layer is a vector whose elements are PL functions of z^l. Therefore, by proposition (1), the elements of z^{l+1} are a PL function of W^l. By applying the same argument until we reach the layer L, we can conclude that the representation Φ(x; W) is a PL function of W^l.

Next, consider the upper bound of the empirical risk, which is specified as follows:

$$\max_{\bar{y} \in \mathcal{Y}} \left\{ \Delta(\bar{y}, y) + (w_{\bar{y}}^{svm} - w_{y}^{svm})^\top \Phi(x; W) \right\}. \quad (5)$$

Once again, since W^svm is fixed, the above upper bound can be interpreted as a PL function of Φ(x; W), and thus, by proposition (1), the upper bound is a PL function of W^l. It only remains to observe that the learning objective (3) also contains the Frobenius norm of W^l. Thus, it follows that the estimation of the parameters of layer l can be reformulated as minimizing the sum of its Frobenius norm and the PL upper bound of the empirical risk over all training samples, as shown in problem (4). Note that we have ignored the constants corresponding to the Frobenius norm of the parameters of all the fixed layers. This constitutes an existential proof of Proposition 2. In the next paragraph, we give an intuition about the feature vectors Ψ(x_i, y_i, h) and the latent space H.

Feature Vectors & Latent Space. The exact form of the joint feature vectors depends on the explicit DC decomposition of the objective function. In Appendix B we detail the practical computations and give an example: we construct two interleaved neural networks whose outputs define the convex and concave parts of the DC objective function. Given the explicit DC objective function, the feature vectors are given by a subgradient and can therefore be obtained by automatic differentiation.

We now give an intuition of what the latent space H represents. Consider an input image x and a corresponding latent variable h ∈ H. The latent variable can be viewed as a set of variables h^k, k ∈ {l + 1, ..., L}. In other words, each subset h^k of the latent variable corresponds to one of the layers of the network that follow the layer l. Intuitively, h^k represents the choice of activation at layer k when going through the PL activation: for each neuron j of layer k, h^k_j takes value i if and only if the i-th piece of the piecewise linear activation is selected. For instance, i is the index of the selected input in the case of a max-pooling unit.

Note that the latent space only depends on the layers that follow the current layer being optimized. This is due to the fact that the input z^{l-1} to the l-th layer is a constant vector
As a consequence, the greater the number of following layers, the greater the size of the latent space, and this growth happens to be exponential. However, as will be seen shortly, it is still possible to efficiently optimize problem (4) for all the layers of the network despite this exponential increase."}, {"section_index": "5", "section_name": "4.2 CONCAVE-CONVEX PROCEDURE", "section_text": "The optimization problem (4) is a DC program in the parameters Wl. This follows from. the fact that the upper bound of the empirical risk is a PL function, and can therefore be expressed as the difference of two convex PL functions (Melzer, 1986). Furthermore. the Frobenius norm of wl is also a convex function of wl. This observation allows us to. obtain an approximate solution of problem (4) using the iterative concave-convex procedure. (CCCP) (Yuille & Rangarajan2002)\nAlgorithm|1|describes the main steps of CCCP. In step 3, we impute the best value of the latent variable corresponding to the ground-truth class y; for each training sample. This imputation corresponds to the linearization step of the CCCP. The selected latent variable corresponds to a choice of activations at each non-linear layer of the network, and therefore defines a path of activations to the ground truth. Next, in step 4, we update the parameters by solving a convex optimization problem. This convex problem amounts to finding the path of activations which minimizes the maximum margin violations given the path to the ground truth defined in step 3.\nThe CCCP algorithm has the desirable property of providing a monotonic decrease in the objective function at each iteration. In other words, the objective function value of itself can be viewed as a block-coordinate algorithm for minimizing the learning objective (3) our overall algorithm provides guarantees of monotonic decrease until convergence. This is one of the main advantages of our approach compared to backpropagation and its variants which fail to provide similar guarantees on the value of the objective function from one iteration to the next.\nAlgorithm 1 CCCP for parameter estimation of the l-th layer of the PL-CNN.\nIn order to solve the convex program (7), which corresponds to a structured SVM problem. we make use of the powerful BCFW algorithm (Lacoste-Julien et al.[2013) that solve. its dual via conditional gradients. This has two main advantages: (i) as the dual is a. smooth quadratic program, each iteration of BCFW provides a monotonic increase in its.\nh* = argmax(W)'(xi, yi,h) hEH\nN ((yi,Yi)+(W')T(xi,Yi,hi)) = argmi max +1 Wl Yi EV h; EH ((Wl)T(xi,Yi,ht))\nobjective; and (ii) the optimal step-size at each iteration can be computed analytically. This is once again in stark contrast to backpropagation, where the estimation of the step-size is still an active area of research (Duchi et al. 2011: Zeiler 2012] Kingma & Ba2015).As shown byLacoste-Julien et al.(2013), given the current estimate of the parameters Wl, the conditional gradient of the dual of program (7) with respect to a training sample (x, yi) can be obtained by solving the following problem:\n(yi,hi) = argmax(W)'(xi,y,h) + (y,yi) yEV,hEH\nThe overall efficiency of the CCCP algorithm relies on our ability to solve problems (6. and (8). At first glance, these problems may appear to be computationally intractable as the latent space H can be very large, especially for layers close to the input (of the order ol millions of dimensions for a typical network). However, the following proposition shows that. 
both the problems can be solved efficiently using the forward and backward passes that are employed in backpropagation.\nProposition 3. Given the current estimate Wl of the parameters for the l-th layer, as well. as the parameter values of all the other fixed layers, problems (6) and (8) can be solved using a forward pass on the network. Furthermore, the joint feature vectors I(xi, yi, h) and. I(xi, yi, h*) can be computed using a backward pass on the network..\nSketch of the Proof. Recall that the latent space consists of the putative activations for each. PL operation in the layers following the current one. Thus, intuitively, the maximization over. the latent variables corresponds to finding the exact activations of all such PL operations. In other words, we need to identify the indices of the linear pieces that are used to compute the value of the PL function in the current state of the network. For a ReLU operation, this. corresponds to estimating max{0, v}, where the input to the ReLU is a scalar v. Similarly, for. a max-pool operation, this corresponds to estimating max,{e' u}, where u is the input vecto. to the max-pool. This is precisely the computation that the forward pass of backpropagatior. performs. Given the activations, the joint feature vector is the subgradient of the sample. with respect to the current layer. Once again, this is precisely what is computed during the. backward pass of the backpropagation algorithm..\nAn example is constructed in Appendix[B to illustrate how to compute the feature vectors in practice."}, {"section_index": "6", "section_name": "4.3 IMPROVING THE BCFW ALGORITHM", "section_text": "As the BCFW algorithm was originally designed to solve a structured SVM problem, it. requires further extensions to be suitable for training a PL-CNN. In what follows, we present three such extensions that improve the initialization, memory requirements and time. complexity of the BCFW algorithm respectively.\nTrust-Region for Initialization. The original BCFW algorithm starts with an initia. parameter W' = 0 (that is, all the parameters are set to 0). The reason for this initialization is that it is possible to compute the dual variables that correspond to the 0 primal variable However, since our algorithm visits each layer of the network several times, it would be desirable to initialize its parameters using its current value Wt. To this end, we introduce a trust-region in the constraints of problem (7), or equivalently, an l2 norm based proxima term in its objective function (Parikh & Boyd,2014). The following proposition shows that this has the desired effect of initializing the BCFW algorithm close to the current parameter values.\nProposition 4. By adding a proximal term ||wl _ Wf|l? to the objective function in 7 we can compute a feasible dual solution whose corresponding primal solution is equal t ,Wf. Furthermore, the addition of the proximal term still allows us to efficiently compute X+ n dioD\nIn practice, we always choose a value of = 10X: this yields an initialization of ~ 0.9w which does not significantly change the value of the objective function\nEfficient Representation of Joint Feature Vectors. The BCFW algorithm requires us to store a linear combination of the feature vectors for each mini-batch. While this requirement is not too stringent for convolutional and multi-class SVM layers, where the dimensionality of the feature vectors is small, it becomes prohibitively expensive for dense layers. 
The following proposition prevents a blow-up in the memory requirements of BCFW\nProposition 5. When optimizing dense layerl, if Wl e Rpxq, we can store a representation of the joint feature vectors I(x, y,h) with vectors of size p in problems (6) and (7). This is in contrast to the naive approach that requires them to be of size p q..\nReducing the Number of Constraints. In order to reduce the amount of time required for the BCFW algorithm to converge, we use the structure of H to simplify problem (7) to a much simpler problem. Specifically, since H represents the activations of the network for a given sample, it has a natural decomposition over the layers: H = Hj ... Ht. We use this structure in the following observation.\nObservation 1. Problem (7) can be approximately solved by optimizing the dual problen on increasingly large search spaces. In other words, we start with constraints of y, followec by Y HL, then V Ht Ht-1 and so on. The algorithm converges when the primal-dua gap is below tolerance."}, {"section_index": "7", "section_name": "5 EXPERIMENTS", "section_text": "Our experiments are designed to assess the ability of LW-SVM (Layer-Wise SVM, ou method) and the SGD baselines to optimize problem (3). To compare LW-SVM with the. state-of-the-art variants of backpropagation, we look at the training and testing accuracies. as well as the training objective value. Unlike dropout, which effectively learns an ensemble. model, we learn a single model using each baseline optimization algorithm. All experiments are conducted on a GPU (Nvidia Titan X) and use Theano (Bergstra et al.] 2010] Bastier et al.] 2012). We compare LW-SVM with Adagrad, Adadelta and Adam. For all data sets. we start at a good solution provided by these solvers and fine-tune it with LW-SVM. We. then check whether a longer run of the SGD solver reaches the same level of performance.\nThe practical use of the LW-SVM algorithm needs choices at the three following levels: hov to select the layer to optimize (i), when to stop the CCCP on each layer (ii) and when tc stop the convex optimization at each inner iteration of the CCCP (iii). These choices are detailed in the next paragraph.\nThe latent variables which are not optimized over are set to be the same as the ones selected. for the ground truth. Experimentally, we observe that for convolutional layers (architectures in section 5, restricting the search space to V yields a dual gap low enough to consider the. problem has converged. This means that in practice for these layers, problem (7) can be. solved by searching directions over the search space V instead of the much larger ) H. The intuition is that the norm of the difference-of-convex decomposition grows with the. number of activations selected differently in the convex and concave parts (see Appendix A for the decomposition of piecewise linear functions). This compels the path of activations to. be the same in the convex and the concave part to avoid large margin violations, especially. for convolutional layers which are followed by numerous non-linearities at the max-pooling. layers.\nThe layer-wise schedule of LW-SVM is as follows: as long as the validation accuracy increases. we perform passes from the end of the network (SVM) to the first layer (i). At each pass. each layer is optimized with one outer iteration of the CCCP (ii). The inner iteration are stopped when the dual objective function does not increase by more than 1% over ar epoch (iii). 
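Putting the three choices together, the schedule can be summarized by the following sketch (our pseudocode; the three helpers are toy stubs standing in for latent-variable imputation, one epoch of BCFW on the dual, and validation, and none of this is the authors' released code):

```python
# Toy stubs so the sketch executes; real routines would go here.
def impute_latent(layers, layer, data):
    return ["forward-pass activations"]   # CCCP step 3 (problem (6))

def bcfw_epoch(layers, layer, data, h_star, lam, mu):
    return 1.0                            # would return the dual objective value

def validation_accuracy(layers):
    return 0.99

def lw_svm(layers, data, lam=1e-3):
    mu = 10 * lam                         # proximal weight: mu = 10 * lambda
    best_val = validation_accuracy(layers)
    while True:
        for layer in reversed(layers):    # one pass: from the SVM down to layer 1
            h_star = impute_latent(layers, layer, data)
            prev_dual = None
            while True:                   # inner CCCP iteration: solve problem (7)
                dual = bcfw_epoch(layers, layer, data, h_star, lam, mu)
                if prev_dual is not None and dual <= 1.01 * prev_dual:
                    break                 # dual grew by less than 1% over an epoch
                prev_dual = dual
        val = validation_accuracy(layers)
        if val <= best_val:
            return layers                 # stop: validation accuracy stopped improving
        best_val = val

print(lw_svm(["conv1", "conv2", "dense1", "svm"], data=None))
```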
We point out that the dual objective function is cheap to compute since we are maintaining its value at all time. By contrast, to compute the exact primal objective function requires a forward pass over the data set without any update.."}, {"section_index": "8", "section_name": "5.1 MNIST DATA SET", "section_text": "Conv 12 filters 5 5 Conv 12 filters 5 5 Dense SVM + ReLU + ReLU 256 output units 10 classes + MaxPool 2x2 + MaxPool 2x2 + ReLU\nFigure 1: Network architecture for the MNIST data set\nMethod The number of epochs is set to 200, 100 and 100 for Adagrad, Adadelta and Adam - Adagrad is given more epochs as we observed it took a longer time to converge. We then use LW-SVM and compare the results on training objective, training accuracy and testing accuracy. We also let the solvers run to up to 500 epochs to verify that we have not stopped the optimization prematurely. The regularization hyperparameter and the initial learning rate are chosen by cross-validation. X is set to O.001 for all solvers, and the initia. learning rates can be found in Appendix[C] For LW-SVM, X is set to the same value as the baseline, and the proximal term to = 10X = 0.01.\nTable 1: Results on MNIST: we compare the performance of LW-SVM with SGD algorithms. on three metrics: training objective, training accuracy and testing accuracy. LW-SVM. outperforms Adadelta and Adam on all three metrics, with marginal improvements since those find already very good solutions..\nSolver (epochs) Training Training Time (s) Testing Objective Accuracy Accuracy Adagrad (200) 0.027 99.94% 707 99.22% Adagrad (500) 0.024 99.96% 1759 99.20% Adagrad (200) + LW-SVM 0.025 99.94% 707+366 99.21% Adadelta (100) 0.049 99.56% 124 98.96% Adadelta (500) 0.048 99.48% 619 99.05% Adadelta (100) + LW-SVM 0.033 99.85% 124+183 99.24% Adam (100) 0.038 99.76% 333 99.19% Adam (500) 0.038 99.72% 1661 99.23% Adam (100) + LW-SVM 0.029 99.89% 333+353 99.23%\nData set & Architecture The training data set consists in 60,o00 gray scale images of size 28 28 with 10 classes, which we split into 50,000 samples for training and 10,000 for validating. The images are normalized, and we do not use any data augmentation. The architecture used for this experiment is shown in Figure|1\nConv 12 filters 5 5 Conv 12 filters 5 5 Dense SVM + ReLU + ReLU 256 output units 10 classes + MaxPool 2x2 + MaxPool 2x2 + ReLU\nResults As Table 1 shows, LW-SVM systematically improves on all training objective training accuracy and testing accuracy. In particular, it obtains the best testing accuracy when combined with Adadelta. Because each convex sub-problem is run up to sufficient convergence, the objective function of LW-SVM features of monotonic decrease at each iteration of the CCCP (blue curves in first row of Figure2)\n0.10 0.10 0.10 Fnnneon Adagrad Adadelta Adam 0.09 0.09 0.09 LW SVM LW SVM LW SVM 0.08 0.08 0.08 0.07 0.07 0.07 0.06 0.06 0.06 0.05 0.05 0.05 0.04 0.04 0.04 0.03 0.03 0.03 0.02 0.02 0.02 0 200 400 600 800 1000 1200 0 50 100 150 200 250 300 350 0 100 200 300 400 500 600 700 100.0 100.0 100.0 99.5 99.5 99.5 99.0 99.0 99.0 98.5 98.5 98.5 Aeeenney 98.0 98.0 98.0 97.5 Training Adagrad 97.5 Training Adadelta 97.5 Training Adam. 
97.0 Training LW-SVM 97.0 Training LW-SVM 97.0 Training LW-SVM Validation Adagrad Validation Adadelta Validation Adam 96.5 96.5 96.5 Validation LW-SVM Validation LW-SVM Validation LW-SVM 96.0 96.0 96.0 0 200 400 600 800 1000 1200 0 50 100 150 200 250 300 350 0 100 200 300 400 500 600 700\nFigure 2: Results on MNIsT of Adagrad, Adadelta and Adam followed by LW-SVM. W verify that switching to LW-SVM leads to better solutions than running SGD longer (shadec. continued plots).\nData sets & Architectures The CIFAR-10/100 data sets are comprised of 60,000 RGB natural images of size 32 32 with 10/100 classes (Krizhevsky2009)). We split the training. set into 45,000 training samples and 5,000 validation samples in both cases. The images are centered and normalized, and we do not use any data augmentation. To obtain a strong enough baseline, we employ (i) a pre-training with a softmax and cross-entropy loss and (ii Batch-Normalization (BN) layers before each non-linearity..\nWe have experimentally found out that pre-training with a softmax layer followed by a. cross-entropy loss led to better behavior and results than using an SVM loss alone. The. baselines are trained with batch normalization. Once they have converged, the estimated mean and standard deviation are fixed like they would be at test time. Then batch normalization becomes a linear transformation, which can be handled by the LW-SVM algorithm. This allows us to compare LW-SVM with a baseline benefiting from batch normalization. Specifically, we use the architecture shown in Figure3.\nConv 64 filters 3 3 Conv 128 filters 3 3 Conv 256 filters 3 3 + BN + ReLU + BN + ReLU + BN + ReLU SVM + Conv 64 filters 3 3 + Conv 128 filters 3 3 + Conv 256 filters 3 3 10 / 100 classes + BN + ReLU + BN + ReLU + BN + ReLU + MaxPool 2x2 + MaxPool 2x2 + MaxPool 2x2\nConv 64 filters 3 3 Conv 128 filters 3 3 Conv 256 filters 3 3 + BN + ReLU + BN + ReLU + BN + ReLU SVM + Conv 64 filters 3 3 + Conv 128 filters 3 3 + Conv 256 filters 3 3 10 / 100 classes + BN + ReLU + BN + ReLU + BN + ReLU + MaxPool 2x2 + MaxPool 2x2 + MaxPool 2x2\nFigure 3: Network architecture for the CIFAR data sets\nMethod Again, the initial learning rates and regularization weight X are obtained by cross-validation, and a value of 0.o01 is obtained for X for all solvers on both datasets. As before, is set to 10X. The initial learning rates are reported in Appendix C The layei schedule and convergence criteria are as described at the beginning of the section. For each SGD optimizer, we train the network for 10 epochs with a cross-entropy loss (preceded by a softmax layer). Then it is trained with an SVM loss (without softmax) for respectively 1000 100 and 100 epochs for Adagrad, Adadelta and Adam. This amount is doubled to verify that the baselines are not harmed by a premature stopping. Results are presented in Tables 2 and 3\nTable 2: Results on CIFAR-10: LW-SVM outperforms Adam and Adadelta on all three metrics. 
It improves on Adagrad, but does not outperform it - however Adagrad takes a long time to converge and does not obtain the best generalization.

| Solver (epochs) | Training Objective | Training Accuracy | Time (h) | Testing Accuracy |
| Adagrad (1000) | 0.059 | 98.42% | 10.58 | 83.15% |
| Adagrad (2000) | 0.009 | 100.00% | 21.14 | 83.84% |
| Adagrad (1000) + LW-SVM | 0.012 | 100.00% | 10.58+1.66 | 83.43% |
| Adadelta (100) | 0.113 | 97.96% | 0.83 | 84.42% |
| Adadelta (200) | 0.054 | 99.83% | 1.66 | 85.02% |
| Adadelta (100) + LW-SVM | 0.038 | 100.00% | 0.83+0.68 | 86.62% |
| Adam (100) | 0.113 | 98.27% | 0.83 | 84.18% |
| Adam (200) | 0.055 | 99.76% | 1.65 | 82.55% |
| Adam (100) + LW-SVM | 0.034 | 100.00% | 0.83+1.07 | 85.52% |

Table 3: Results on CIFAR-100: LW-SVM improves on all other solvers and obtains the best testing accuracy.

| Solver (epochs) | Training Objective | Training Accuracy | Time (h) | Testing Accuracy |
| Adagrad (1000) | 0.201 | 95.36% | 10.68 | 54.00% |
| Adagrad (2000) | 0.044 | 99.98% | 21.20 | 54.55% |
| Adagrad (1000) + LW-SVM | 0.062 | 99.98% | 10.68+3.40 | 53.97% |
| Adadelta (100) | 0.204 | 95.68% | 0.84 | 58.71% |
| Adadelta (200) | 0.088 | 99.90% | 1.67 | 58.03% |
| Adadelta (100) + LW-SVM | 0.052 | 99.98% | 0.84+1.48 | 61.20% |
| Adam (100) | 0.221 | 95.79% | 0.84 | 58.32% |
| Adam (200) | 0.088 | 99.87% | 1.66 | 57.81% |
| Adam (100) + LW-SVM | 0.059 | 99.98% | 0.84+1.69 | 60.17% |

Figure 4: Results on CIFAR-10 of Adagrad, Adadelta and Adam followed by LW-SVM. The successive drops of the training objective function with LW-SVM correspond to the passes over the layers.

Results It can be seen from this set of results that LW-SVM always improves over the solution of the SGD algorithm, for example on CIFAR-100, decreasing the objective value of Adam from 0.22 to 0.06, or improving the test accuracy of Adadelta from 84.4% to 86.6% on
For this experiment we use a VGG-16 network (configuration D in (Simonyan & Zisserman,2015)). We start with a pre-trained model as publicly available online, and we tune each of the dense layers as well as the final SVM layer with the LW-SVM algorithm. This experiment is designed to test the scalability of LW-SVM to large data sets and large networks, rather than comparing with the optimization baselines as before - indeed for any baseline, obtaining proper convergence as in previous experiments would take a very long time. We set the hyperparameters tc 0.001 and to 10X as previously. We budget five epochs per layer, which in total takes two days of training on a single GPU (Nvidia Titan X). At training time we used centered crops of size 224 224. The evaluation method is the same as the single test scale method described in (Simonyan & Zisserman] 2015). We report the results on the validation set in Table4] for the Pre-Trained model (PT) and the same model further optimized by LW-SVM (PT+LW-SVM):\nTable 4: Results on the 1,O00-way classification challenge of ImageNet on the validatio set, for the Pre-Trained model (PT) and the same model further optimized by LW-SVA PT+LW-SVM)\nSince the objective function penalizes the top-1 error, it is logical to observe that the improvement is most important on the top-1 accuracy. Importantly, having an efficient representation of feature vectors proves to be essential for such large networks: for instance in the optimization of the first fully connected layer with a batch-size of 100, the use of oui representation lowers the memory requirements of the BCFW algorithm from 7,600GB tc 20GB, which can then fit in the memory of a powerful computer."}, {"section_index": "10", "section_name": "6 DISCUSSION", "section_text": "We presented a novel layerwise optimization algorithm for a large and useful class of convolutional neural networks, which we term PL-CNNs. Our key observation is that the optimization of the parameters of one layer of a PL-CNN is equivalent to solving a latent structured SVM problem. As the problem is a DC program, it naturally lends itself to the iterative CCCP approach, which optimizes a convex structured SVM objective at each iteration. This allows us to leverage the advancements made in structured SVM optimization over the past decade to design a computationally feasible approach for learning PL-CNNs Specifically, we use the BCFW algorithm and extend it to improve its initialization, memory requirements and time complexity. In particular, this allows our method to not require the tuning of any learning rate. Using the publicly available MNIST, CIFAR-10 and CIFAR-100 data sets, we show that our approach provides a boost for learning PL-CNNs over the state of the art backpropagation algorithms. Furthermore, we demonstrate scalability of the method with results on the ImageNet data set with a large network.\nWhen the mean and standard deviation estimations of batch normalization are not fixe. unlike in our experiments with LW-SVM), batch normalization is not a piecewise linea transformation, and therefore cannot be used in conjunction with the BCFW algorithn for SVMs. However, it is difference-of-convex as it is a C2 function (Horst & Thoai. 1999 Incorporating a normalization scheme into our framework will be the object of future worl With our current methodology, LW-SVM algorithm can already be used on most standarc. 
architectures like VGG, Inception and ResNet-type architectures..\nIt is worth noting that other approaches for solving structured SVM problems, such as cutting-plane algorithms (Tsochantaridis et al.[2004] Joachims et al.[2009) and stochastic subgradient descent (Shalev-Shwartz et al.J 2009), also rely on the efficiency of estimating the conditional gradient of the dual. Hence, all these methods are equally applicable to our setting. Indeed, the main strength of our approach is the establishment of a hitherto unknown connection between CNNs and latent structured SVMs. We believe that our observation will allow researchers to transfer the substantial existing knowledge of DC programs in general. and latent SVMs specifically, to produce the next generation of principled optimization algorithms for deep learning. In fact, there are already several such improvements that can be readily applied in our setting, which were not explored only due to a lack of time. This includes multi-plane variants of BCFW (Shah et al.]2015f Osokin et al.]2016), as well as generalizations of Frank-Wolfe such as partial linearization (Mohapatra et al.T2016)."}, {"section_index": "11", "section_name": "ACKNOWLEDGMENTS", "section_text": "This work was supported by the EPSRC AIMS CDT grant EP/L015987/1, the EPSRC Programme Grant Seebibyte EP/M013774/1 and Yougov. Many thanks to A. Desmaison, R Bunel and D. Bouchacourt for the helpful discussions."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu.. Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano:. a CPU and GPU math expression compiler. Python for Scientific Computing Conference. (SciPy), 2010. Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, et al. Natural neural networks. Conference on Neural Information Processing Systems, 2015.. John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online. learning and stochastic optimization. Journal of Machine Learning Research, 2011. o the ReII\nIan Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016\nFrederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements, 2012.\nYoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, et al. Greedy layer-wise training of deep networks. Conference on Neural Information Processing Systems, 2o07\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. International Conference on Machine Learning, 2015\nThorsten Joachims, Thomas Finley, and Chun-Nam John Yu. Cutting-plane training ol structural SVMs. Machine Learning, 2009.\nPritish Mohapatra, Puneet Dokania, CV Jawahar, and M Pawan Kumar. Partial linearization. based optimization for multi-class SVM. European Conference on Computer Vision, 2016\nOlga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng. Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, anc Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal o. Computer Vision, 2015.\nMatthew Zeiler. ADADELTA: an adaptive learning rate method. CoRR, 2012.\nYuchen Zhang, Percy Liang, and Martin J. Wainwright. Convexified convolutional neural networks. 
arXiv preprint arXiv:1609.01000, 2016.

Anton Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, Puneet Dokania, and Simon Lacoste-Julien. Minding the gaps for block Frank-Wolfe optimization of structured SVMs. International Conference on Machine Learning, 2016.

Neal Parikh and Stephen Boyd. Proximal algorithms. Foundations and Trends in Optimization, 2014.

A PROOF OF PROPOSITION 1

Proof of Proposition 1. By the definition from (Melzer, 1986), we can write the function g as the difference of two pointwise maxima of linear functions, and each inner function g_i as a difference of convex functions:

g(v) = max_{j ∈ [m+]} {a_j^T v} - max_{k ∈ [m-]} {b_k^T v}, and ∀i ∈ [n], g_i(u) = g_i^+(u) - g_i^-(u),

where all the g_i^+, g_i^- are pointwise maxima of linear functions. Then:

g̃(u) = g([g_1(u), ..., g_n(u)]^T) = max_{j ∈ [m+]} { Σ_{i=1}^n a_{j,i} g_i(u) } - max_{k ∈ [m-]} { Σ_{i=1}^n b_{k,i} g_i(u) }.

Writing a_{j,i} = a_{j,i}^+ - a_{j,i}^- with a_{j,i}^+, a_{j,i}^- ≥ 0, each term a_{j,i} g_i(u) decomposes as c_{j,i}(u) - d_{j,i}(u), with c_{j,i} = a_{j,i}^+ g_i^+ + a_{j,i}^- g_i^- and d_{j,i} = a_{j,i}^+ g_i^- + a_{j,i}^- g_i^+, both convex as non-negative combinations of convex functions. Let c_j = Σ_i c_{j,i} and d_j = Σ_i d_{j,i}, and define c̃_k, d̃_k analogously from the coefficients b_{k,i}. Using the identity

max_{j ∈ [m+]} {c_j(u) - d_j(u)} = max_{j ∈ [m+]} { c_j(u) + Σ_{j' ∈ [m+]\{j}} d_{j'}(u) } - Σ_{j' ∈ [m+]} d_{j'}(u)

on both maxima, we obtain:

g̃(u) = [ max_{j ∈ [m+]} { c_j(u) + Σ_{j' ∈ [m+]\{j}} d_{j'}(u) } + Σ_{k ∈ [m-]} d̃_k(u) ] - [ max_{k ∈ [m-]} { c̃_k(u) + Σ_{k' ∈ [m-]\{k}} d̃_{k'}(u) } + Σ_{j ∈ [m+]} d_j(u) ].

In each line of the last equality, we recognize a pointwise maximum of a linear combination of pointwise maxima of linear functions. This constitutes a pointwise maximum of linear functions.

B COMPUTING THE FEATURE VECTORS

We describe here how to compute the feature vectors in practice. To this end, we show how to construct two (intertwined) neural networks that decompose the objective function into a convex and a concave part. We call these Difference of Convex (DC) networks. Once the DC networks are defined, a standard forward and backward pass in the two networks yields the feature vectors for the convex and concave contributions to the objective function. First we derive how to perform a DC decomposition in linear and non-linear layers, and then we construct an example of DC networks.

DC Decomposition in a Linear Layer. Let W be the weights of a fixed linear layer. We introduce W+ = ½(|W| + W) and W- = ½(|W| - W). We can note that W+ and W- have exclusively non-negative weights, and that W = W+ - W-. Say we have an input u with the DC decomposition (u_cvx, u_ccv), that is: u = u_cvx - u_ccv, where both u_cvx and u_ccv are convex. Then we can decompose the output of the layer as:

W u = (W+ u_cvx + W- u_ccv) - (W+ u_ccv + W- u_cvx),

where both parenthesized terms are convex.

DC Decomposition in a Piecewise Linear Activation Layer. For simplicity purposes we consider that the non-linear layer is a pointwise maximum across K scalar inputs, that is, for an input (u_k)_{k ∈ [K]} ∈ R^K, the output is max_{k ∈ [K]} u_k (the general multi-dimensional case can be found in Appendix A). We suppose that we have a DC decomposition (u_k^cvx, u_k^ccv) for each input k. Then we can write the following decomposition for the output of the layer:

max_{k ∈ [K]} u_k = max_{k ∈ [K]} {u_k^cvx - u_k^ccv} = max_{k ∈ [K]} { u_k^cvx + Σ_{i ∈ [K], i ≠ k} u_i^ccv } - Σ_{k ∈ [K]} u_k^ccv,   (10)

where both terms are convex. In particular, for a ReLU, we can write:

max(u^cvx - u^ccv, 0) = max(u^cvx, u^ccv) - u^ccv,   (11)

and for a Max-Pooling layer, one can easily verify that equation (10) is equivalent to:

MaxPool(u^cvx - u^ccv) = [MaxPool(u^cvx - u^ccv) + SumPool(u^ccv)] - SumPool(u^ccv),   (12)

where again both terms are convex.

An Example of DC Networks. We use the previous observations to obtain a DC decomposition in any layer. We now take the example of the neural network used for the experiments on the MNIST data set, and we show how to construct the two neural networks when optimizing W1, the weights of the first convolutional layer. First let us recall the architecture without decomposition:

Conv1 (W1) - ReLU - MaxPool 2x2 - Conv2 (W2) - ReLU - MaxPool 2x2 - Dense1 (W3) - ReLU - SVM (10 classes).

We want to optimize the first convolutional layer, therefore we fix all other parameters. Then we apply all operations as described in the previous paragraphs, which yields the DC networks in Figure 7.

(Figure: the convex and concave networks side by side, with the corresponding non-decomposed network alongside; Conv2 and Dense1 are split into their (W+, W-) parts, the ReLU and MaxPool units follow equations (10)-(12), and the SVM layer is decomposed into its convex and concave parts.)

Figure 7: Difference of Convex Networks for the optimization of Conv1 in the MNIST architecture. The two leftmost columns represent the DC networks. For each layer, the right column indicates the corresponding non-decomposed operation. Note that we represent the DC decomposition of the SVM layer as unique blocks to keep the graph simple. Given the decomposition method for linear and non-linear layers, one can write down the explicit operations without special difficulty.

The network graph in Figure 7 illustrates Proposition 3 for the optimization of W1: suppose we are interested in f_cvx(x, W1), the convex part of the objective function for a given sample x, and we wish to obtain the feature vector needed to perform an update of BCFW. With a forward pass, the oracle for the latent and label variables (h, y) is efficiently computed; and with a backward pass, we obtain the corresponding feature vector Ψ(x, y, h). Indeed, we recall from problem (8) that (h, y) are the latent and label variables maximizing f_cvx(x, W1). Then, given x, the forward pass in the DC networks sequentially solves the nested maximization: it maximizes the activation of the ReLU and MaxPooling units at each layer, thereby selecting the best latent variable h at each non-linear layer, and maximizes the output of the SVM layer, thereby selecting the best label y. At the end of the forward pass, f_cvx(x, W1) is therefore available as the output of the convex network, and the feature vector Ψ(x, y, h) can be computed as a subgradient of f_cvx(x, W1) with respect to W1.

Linearizing the concave part is equivalent to fixing the activations of the DC networks, which can be done by using a fixed copy of W1 at the linearization point (all other weights being fixed anyway). Then one can re-use the above reasoning to obtain the feature vectors for the linearized concave part.
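As an illustration, here is a minimal numpy sketch of the two layer-wise decompositions above. The function names (dc_linear, dc_relu) and the toy shapes are ours, not part of the paper; the sanity check verifies that the DC pair always recombines to the usual forward pass.

```python
import numpy as np

def dc_linear(W, u_cvx, u_ccv):
    """Propagate a DC pair (u_cvx, u_ccv) through a fixed linear layer W.

    With W+ = max(W, 0) and W- = max(-W, 0), we have W = W+ - W- and
    W @ (u_cvx - u_ccv) = (W+ @ u_cvx + W- @ u_ccv) - (W+ @ u_ccv + W- @ u_cvx),
    where both parenthesized terms are convex.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.maximum(-W, 0.0)
    out_cvx = W_pos @ u_cvx + W_neg @ u_ccv
    out_ccv = W_pos @ u_ccv + W_neg @ u_cvx
    return out_cvx, out_ccv

def dc_relu(u_cvx, u_ccv):
    """Equation (11): max(u_cvx - u_ccv, 0) = max(u_cvx, u_ccv) - u_ccv."""
    return np.maximum(u_cvx, u_ccv), u_ccv

# Sanity check: the DC pair recombines to the standard ReLU(W @ u).
rng = np.random.default_rng(0)
W, u = rng.normal(size=(4, 6)), rng.normal(size=6)
c, v = dc_linear(W, u, np.zeros_like(u))   # start from the trivial pair (u, 0)
c, v = dc_relu(c, v)
assert np.allclose(c - v, np.maximum(W @ u, 0.0))
```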
Altogether, this methodology allows our algorithm to be implemented in any standard deep learning library (our implementation is available at ...).

Hyper-parameters. The hyper-parameters are obtained by cross-validation with a search on powers of 10. In this section, η will denote the initial learning rate. We denote the Softmax + Cross-Entropy loss by SCE, while SVM stands for the usual Support Vector Machine loss.

Table 5: Hyper-parameters for the SGD solvers.

Solver | MNIST | CIFAR-10 (SCE) | CIFAR-10 (SVM) | CIFAR-100 (SCE) | CIFAR-100 (SVM)
Adagrad | η = 0.01, λ = 0.001 | η = 0.01, λ = 0.001 | η = 0.001, λ = 0.001 | η = 0.01, λ = 0.001 | η = 0.001, λ = 0.001
Adadelta | η = 1, λ = 0.001 | η = 1, λ = 0.001 | η = 0.1, λ = 0.001 | η = 1, λ = 0.001 | η = 0.1, λ = 0.001
Adam | η = 0.001, λ = 0.001 | η = 0.001, λ = 0.001 | η = 0.0001, λ = 0.001 | η = 0.001, λ = 0.001 | η = 0.0001, λ = 0.001

One may note that the hyper-parameters are the same for both CIFAR-10 and CIFAR-100 for each combination of solver and loss. This makes sense since the initial learning rate mainly depends on the architecture of the network (and not so much on which particular images are fed to this network), which is very similar for the experiments on the CIFAR-10 and CIFAR-100 data sets.

SVM FORMULATION & DUAL DERIVATION

Multi-Class SVM. Suppose we are given a data set of N samples, for which every sample i has a feature vector φ_i ∈ R^d and a ground truth label y_i ∈ Y. For every possible label ȳ_i ∈ Y, we introduce the augmented feature vector ψ_i(ȳ_i) ∈ R^{|Y|d} containing φ_i at index y_i, -φ_i at index ȳ_i, and zeros everywhere else (then ψ_i(y_i) is just a vector of zeros). We also define Δ(ȳ_i, y_i) as the loss incurred by choosing the output ȳ_i instead of the ground truth y_i in our task. For classification, this is the zero-one loss for example.

The SVM optimization problem is formulated as:

min_{w,ξ} (λ/2) ||w||² + (1/N) Σ_{i=1}^N ξ_i
subject to: ∀i ∈ [N], ∀ȳ_i ∈ Y, ξ_i ≥ w^T ψ_i(ȳ_i) + Δ(ȳ_i, y_i),

where λ is the regularization hyperparameter. We now add a proximal term, with weight μ, to a given starting point w0:

min_{w,ξ} (λ/2) ||w||² + (μ/2) ||w - w0||² + (1/N) Σ_{i=1}^N ξ_i
subject to: ∀i ∈ [N], ∀ȳ_i ∈ Y, ξ_i ≥ w^T ψ_i(ȳ_i) + Δ(ȳ_i, y_i).

Factorizing the second-order polynomial in w, we obtain the equivalent problem (changed by a constant), with ρ = μ/(λ+μ):

min_{w,ξ} ((λ+μ)/2) ||w - ρ w0||² + (1/N) Σ_{i=1}^N ξ_i
subject to: ∀i ∈ [N], ∀ȳ_i ∈ Y, ξ_i ≥ w^T ψ_i(ȳ_i) + Δ(ȳ_i, y_i).

Dual Objective Function. Introducing a dual variable α_i(ȳ_i) ≥ 0 for each constraint of the primal problem above, the Lagrangian dual can be written as:

max_{α ≥ 0} min_{w,ξ} ((λ+μ)/2) ||w - ρ w0||² + (1/N) Σ_{i=1}^N ξ_i + Σ_{i=1}^N Σ_{ȳ_i ∈ Y} α_i(ȳ_i) (Δ(ȳ_i, y_i) + w^T ψ_i(ȳ_i) - ξ_i).

Then we obtain the following KKT conditions (after rescaling the dual variables by N):

∂L/∂ξ_i = 0 ⟹ ∀i ∈ [N], Σ_{ȳ_i ∈ Y} α_i(ȳ_i) = 1,
∂L/∂w = 0 ⟹ w = ρ w0 - (1/(N(λ+μ))) Σ_{i=1}^N Σ_{ȳ_i ∈ Y} α_i(ȳ_i) ψ_i(ȳ_i) =: ρ w0 - A α,

where A is the matrix whose column indexed by (i, ȳ_i) is ψ_i(ȳ_i)/(N(λ+μ)). We also introduce b = (Δ(ȳ_i, y_i)/N)_{i,ȳ_i}. We define P_N(Y) as the sample-wise probability simplex:

u ∈ P_N(Y) if: ∀i ∈ [N], ∀ȳ_i ∈ Y, u_i(ȳ_i) ≥ 0, and ∀i ∈ [N], Σ_{ȳ_i ∈ Y} u_i(ȳ_i) = 1.

The dual problem can then be written as:

max_{α ∈ P_N(Y)} -((λ+μ)/2) ||A α||² + (λ+μ) ρ w0^T (A α) + α^T b,

or equivalently as the minimization over P_N(Y) of:

f(α) = ((λ+μ)/2) ||A α||² - (λ+μ) ρ w0^T (A α) - α^T b.
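For concreteness, a toy numpy sketch of this dual objective and its block gradient is given below; it stores A as an explicit dense matrix (columns ψ_i(ȳ)/(N(λ+μ))), which is for illustration only and is exactly what the efficient representation discussed in the paper avoids.

```python
import numpy as np

def dual_objective(alpha, A, b, w0, lam, mu):
    """f(α) = ((λ+μ)/2)||Aα||² - (λ+μ)ρ w0ᵀ(Aα) - bᵀα, with ρ = μ/(λ+μ)."""
    rho = mu / (lam + mu)
    Aa = A @ alpha
    return 0.5 * (lam + mu) * (Aa @ Aa) - (lam + mu) * rho * (w0 @ Aa) - b @ alpha

def dual_block_gradient(alpha, A, b, w0, lam, mu, block):
    """Gradient of f restricted to the dual coordinates of one sample."""
    rho = mu / (lam + mu)
    Ab, bb = A[:, block], b[block]
    return (lam + mu) * Ab.T @ (A @ alpha - rho * w0) - bb

rng = np.random.default_rng(0)
A, b, w0 = rng.normal(size=(5, 8)), rng.normal(size=8), rng.normal(size=5)
alpha = np.full(8, 1.0 / 8)                         # toy point in the simplex
print(dual_objective(alpha, A, b, w0, lam=0.001, mu=0.01))
print(dual_block_gradient(alpha, A, b, w0, 0.001, 0.01, block=[0, 1]).shape)
```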
BCFW Derivation. We write ∇_(i)f for the gradient of f with respect to the block (i) of variables in α, padded with zeros on the blocks (j) for j ≠ i. Similarly, A_(i) and b_(i) contain the columns of A and the elements of b for the block of coordinates (i), and zeros elsewhere. We can write:

∇_(i)f(α) = (λ+μ) A_(i)^T (A α) - (λ+μ) ρ A_(i)^T w0 - b_(i).

Then the search corner for the block of coordinates (i) is given by:

s_i = argmin_s ⟨s, ∇_(i)f(α)⟩ = argmin_s (λ+μ) (A α - ρ w0)^T A_(i) s - b_(i)^T s.

Using A α = ρ w0 - w, A_(i) s = (1/(N(λ+μ))) Σ_{ȳ_i} s(ȳ_i) ψ_i(ȳ_i) and b_(i)^T s = (1/N) Σ_{ȳ_i} s(ȳ_i) Δ(ȳ_i, y_i), this becomes:

s_i = argmin_s -(1/N) Σ_{ȳ_i} s(ȳ_i) [w^T ψ_i(ȳ_i) + Δ(ȳ_i, y_i)] = argmax_s Σ_{ȳ_i} s(ȳ_i) [w^T ψ_i(ȳ_i) + Δ(ȳ_i, y_i)],

i.e., the indicator of the output ȳ* of the loss-augmented inference ȳ* = argmax_{ȳ_i ∈ Y} H_i(ȳ_i, w), with H_i(ȳ_i, w) = w^T ψ_i(ȳ_i) + Δ(ȳ_i, y_i). Introducing the block iterates w_i = -A_(i) α_(i) and l_i = b_(i)^T α_(i), the search corner yields:

w_s = -A_(i) s_i = -(1/(N(λ+μ))) ψ_i(ȳ*), and l_s = b_(i)^T s_i = (1/N) Δ(ȳ*, y_i).

The optimal step size in the direction of the block of coordinates (i), γ* = argmin_γ f(α + γ(s_i - α_(i))), is obtained from the first and second derivatives of f along this direction, which gives:

γ* = [ (w_i - w_s)^T w - (l_i - l_s)/(λ+μ) ] / ||w_i - w_s||², clipped to [0, 1].

And the updates are the same as in standard BCFW.

Algorithm 2: BCFW with warm start
1: Let w(0) = ρ w0 and, ∀i ∈ [N], w_i(0) = 0
2: Let l(0) = 0 and, ∀i ∈ [N], l_i(0) = 0
3: for k = 0 ... K do
4:   Pick i randomly in {1, ..., N}
5:   Get ȳ* = argmax_{ȳ_i ∈ Y} H_i(ȳ_i, w(k)) and w_s = -(1/(N(λ+μ))) ∂H_i(ȳ*, w(k))/∂w
6:   l_s = (1/N) Δ(ȳ*, y_i)
7:   γ = [ (w_i(k) - w_s)^T w(k) - (l_i(k) - l_s)/(λ+μ) ] / ||w_i(k) - w_s||², clipped to [0, 1]
8:   w_i(k+1) = (1 - γ) w_i(k) + γ w_s
9:   w(k+1) = w(k) + w_i(k+1) - w_i(k)
10:  l_i(k+1) = (1 - γ) l_i(k) + γ l_s
11:  l(k+1) = l(k) + l_i(k+1) - l_i(k)
12: end for

Note that the derivation of the Lagrangian dual has introduced a dual variable α_i(ȳ_i) for each linear constraint of the SVM problem (this can be replaced by α_i(h_i, ȳ_i) if we consider latent variables). These dual variables indicate the complementary slackness not only for the output class ȳ_i, but also for each of the activations which define a piece of the piecewise linear hinge loss. Therefore a choice of α defines a path of activations.
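A schematic numpy version of the update loop of Algorithm 2 is sketched below. The callback `max_oracle(i, w)` stands in for the loss-augmented inference (a forward pass in the DC networks) and must return ψ_i(ȳ*) and Δ(ȳ*, y_i); it, the toy oracle, and all sizes are our assumptions for illustration.

```python
import numpy as np

def bcfw_warm_start(max_oracle, N, d, w0, lam, mu, num_steps, seed=0):
    rng = np.random.default_rng(seed)
    rho = mu / (lam + mu)
    w = rho * w0                          # α = 0  =>  w = ρ w0
    w_i = np.zeros((N, d))                # per-block primal iterates
    l_i = np.zeros(N)                     # per-block linear terms
    for _ in range(num_steps):
        i = rng.integers(N)
        psi, delta = max_oracle(i, w)     # ȳ* = argmax H_i(ȳ, w)
        w_s = -psi / (N * (lam + mu))
        l_s = delta / N
        diff = w_i[i] - w_s
        denom = diff @ diff
        if denom == 0.0:
            continue
        gamma = (diff @ w - (l_i[i] - l_s) / (lam + mu)) / denom
        gamma = min(max(gamma, 0.0), 1.0)  # clip to [0, 1]
        w_new = (1 - gamma) * w_i[i] + gamma * w_s
        w = w + (w_new - w_i[i])
        w_i[i] = w_new
        l_i[i] = (1 - gamma) * l_i[i] + gamma * l_s
    return w

def toy_oracle(i, w):
    # Stand-in oracle: a fixed random violation and a unit task loss.
    return np.random.default_rng(i).normal(size=w.shape), 1.0

print(bcfw_warm_start(toy_oracle, N=10, d=5, w0=np.zeros(5),
                      lam=0.001, mu=0.01, num_steps=100)[:3])
```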
E SENSITIVITY OF SGD ALGORITHMS

Here we discuss some weaknesses of the SGD-based algorithms that we have encountered in practice for our learning objective function. These behaviors have been observed in the case of PL-CNNs, and generally may not appear in different architectures (in particular, the failure to learn with high regularization goes away with the use of batch normalization layers).

E.1 INITIAL LEARNING RATE

As mentioned in the experiments section, the choice of the initial learning rate is critical for good performance of all of Adagrad, Adadelta and Adam. When the learning rate is too high, the network does not learn anything and the training and validation accuracies are stuck at random level. When it is too low, the network may take a considerably greater number of epochs to converge.

E.2 FAILURES TO LEARN

Regularization. When the regularization hyper-parameter is set to a value of 0.01 or higher on CIFAR-10, SGD solvers get trapped in a local minimum and fail to learn. The SGD solvers indeed fall in the local minimum of shutting down all activations on the ReLUs, which provides a zero-valued feature vector to the SVM loss layer (and a hinge loss of one). As a consequence, no information can be back-propagated. We plot this behavior below:

(Figure: three panels showing the objective value, the training accuracy (%) and the validation accuracy (%) as a function of training time, for LW-SVM, Adagrad, Adadelta and Adam.)

Figure 8: Behavior of different algorithms for λ = 0.01. The x-axis has been rescaled to compare the evolution of all algorithms (real training times vary between half an hour and a few hours for the different runs).

In this situation, the network is at a bad saddle point (note that the training and validation accuracies are stuck at random levels). Our algorithm does not fall into such bad situations; however, it is not able to get out of them either: each layer is at a pathological critical point of its own objective function, which makes our algorithm unable to escape from it.

With a lower initial learning rate, the evolution is slower, but eventually the solver goes back to the bad situation presented above.

Biases. The same failing behavior as above has been observed when not using the biases in the network. Again, our algorithm is robust to this change.
1 INTRODUCTION

Text understanding starts with the challenge of finding a machine-understandable representation that captures the semantics of texts. Bag-of-words (BoW) and its N-gram extensions are arguably the most commonly used document representations. Despite its simplicity, BoW works surprisingly well for many tasks (Wang & Manning, 2012). However, by treating words and phrases as unique and discrete symbols, BoW often fails to capture the similarity between words or phrases and also suffers from sparsity and high dimensionality.

Recent works on using neural networks to learn distributed vector representations of words have gained great popularity. The well celebrated Word2Vec (Mikolov et al., 2013a), by learning to predict the target word using its neighboring words, maps words of similar meanings to nearby points in the continuous vector space. The surprisingly simple model has succeeded in generating high-quality word embeddings for tasks such as language modeling, text understanding and machine translation. Word2Vec naturally scales to large datasets thanks to its simple model architecture. It can be trained on billions of words per hour on a single machine.

Paragraph Vectors (Le & Mikolov, 2014) generalize the idea to learn vector representations for documents. A target word is predicted by the word embeddings of its neighbors together with a unique document vector learned for each document. It outperforms established document representations, such as BoW and Latent Dirichlet Allocation (Blei et al., 2003), on various text understanding tasks (Dai et al., 2015). However, two caveats come with this approach: 1) the number of parameters grows with the size of the training corpus, which can easily go to billions; and 2) it is expensive to generate vector representations for unseen documents at test time.

We propose an efficient model architecture, referred to as Document Vector through Corruption (Doc2VecC), to learn vector representations for documents. It is motivated by the observation that linear operations on the word embeddings learned by Word2Vec can sustain a substantial amount of syntactic and semantic meaning of a phrase or a sentence (Mikolov et al., 2013b). For example, vec("Russia") + vec("river") is close to vec("Volga River") (Mikolov & Dean, 2013), and vec("king") - vec("man") + vec("woman") is close to vec("queen") (Mikolov et al., 2013b). In Doc2VecC, we represent each document as a simple average of the word embeddings of all the words in the document. In contrast to existing approaches which post-process learned word embeddings to form the document representation (Socher et al., 2013; Mesnil et al., 2014), Doc2VecC enforces that a meaningful document representation can be formed by averaging the word embeddings during learning. Furthermore, we include a corruption model that randomly removes words from a document during learning, a mechanism that is critical to the performance and learning speed of our model.

Doc2VecC has several desirable properties: 1. The model complexity of Doc2VecC is decoupled from the size of the training corpus, depending only on the size of the vocabulary; 2. The model architecture of Doc2VecC resembles that of Word2Vec, and can be trained very efficiently; 3.
The new framework implicitly introduces a data-dependent regularization, which favors rare or informative words and suppresses words that are common but not discriminative; 4. The vector representation of a document can be generated by simply averaging the learned word embeddings of all the words in the document, which significantly boosts test efficiency; 5. The vector representation generated by Doc2VecC matches or beats the state-of-the-art for sentiment analysis, document classification as well as semantic relatedness tasks.

RELATED WORKS AND NOTATIONS

Text representation learning has been extensively studied. Popular representations range from the simplest BoW and its term-frequency based variants (Salton & Buckley, 1988), language model based methods (Croft & Lafferty, 2013; Mikolov et al., 2010; Kim et al., 2015), topic models (Deerwester et al., 1990; Blei et al., 2003), Denoising Autoencoders and their variants (Vincent et al., 2008; Chen et al., 2012), and distributed vector representations (Mesnil et al., 2014; Le & Mikolov, 2014; Kiros et al., 2015). Another prominent line of work includes learning task-specific document representations with deep neural networks, such as CNN (Zhang & LeCun, 2015) or LSTM based approaches (Tai et al., 2015; Dai & Le, 2015).

In this section, we briefly introduce Word2Vec and Paragraph Vectors, the two approaches that are most similar to ours. There are two well-known model architectures used for both methods, referred to as Continuous Bag-of-Words (CBoW) and Skipgram models (Mikolov et al., 2013a). In this work, we focus on CBoW. Extending to Skipgram is straightforward. Here are the notations we are going to use throughout the paper:

- V: the vocabulary used in the training corpus, of size v;
- x ∈ R^{v×1}: BoW of a document, where x_j = 1 iff word j does appear in the document;
- c^t ∈ R^{v×1}: BoW of the local context w_{t-k}, ..., w_{t-1}, w_{t+1}, ..., w_{t+k} at the target position t; c_j^t = 1 iff word j appears within the sliding window of the target;
- U ∈ R^{h×v}: the projection matrix from the input space to a hidden space of size h. We use u_w to denote the column in U for word w, i.e., the "input" vector of word w;
- V^T ∈ R^{v×h}: the projection matrix from the hidden space to the output. Similarly, we use v_w to denote the column in V for word w, i.e., the "output" vector of word w.

Word2Vec. Word2Vec proposed a neural network architecture of an input layer, a projection layer parameterized by the matrix U and an output layer by V^T. It defines the probability of observing the target word w_t in a document D given its local context c^t as

P(w_t | c^t) = exp(v_{w_t}^T U c^t) / Σ_{w' ∈ V} exp(v_{w'}^T U c^t).

The word vectors are then learned to maximize the log likelihood of observing the target word at each position of the document. Various techniques (Mitchell & Lapata, 2010; Zanzotto et al., 2010; Yessenalina & Cardie, 2011; Grefenstette et al., 2013; Socher et al., 2013; Kusner et al., 2015) have been studied to generate vector representations of documents from word embeddings, among which the simplest approach is to use a weighted average of word embeddings. Similarly, our method forms the document representation by averaging the word embeddings of all the words in the document. Differently, as our model encodes the compositionality of words in the learned word embeddings, heuristic weighting at test time is not required.

Paragraph Vectors. Paragraph Vectors, on the other hand, explicitly learns a document vector with the word embeddings. It introduces another projection matrix D ∈ R^{h×n}. Each column of D acts as a memory of the global topic of the corresponding document. It then defines the probability of observing the target word w_t in a document D given its local context c^t as

P(w_t | c^t, d) = exp(v_{w_t}^T (U c^t + d)) / Σ_{w' ∈ V} exp(v_{w'}^T (U c^t + d)),

where d ∈ D is the vector representation of the document.
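Before moving on, here is a toy numpy illustration of the two scoring rules above; U, V, the 0/1 context vector and all sizes are illustrative values of ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
v_size, h = 1000, 50
U = rng.normal(scale=0.1, size=(h, v_size))   # "input" word embeddings
V = rng.normal(scale=0.1, size=(h, v_size))   # "output" word embeddings

def softmax(scores):
    scores = scores - scores.max()            # numerical stability
    e = np.exp(scores)
    return e / e.sum()

def cbow_probs(ct):
    """P(w | c^t) for every word w, given a bag-of-words local context ct."""
    return softmax(V.T @ (U @ ct))

def pv_probs(ct, d):
    """Paragraph Vectors adds a per-document vector d to the hidden state."""
    return softmax(V.T @ (U @ ct + d))

ct = np.zeros(v_size); ct[[3, 17, 42]] = 1.0  # a 3-word local context
print(cbow_probs(ct).sum(), pv_probs(ct, rng.normal(size=h)).sum())  # both ~1.0
```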
As we can see from this formula, the complexity of Paragraph Vectors grows not only with the size of the vocabulary, but also with the size of the training corpus. While we can reasonably limit the size of a vocabulary to be within a million for most datasets, the size of a training corpus can easily go to billions. What is more concerning is that, in order to come up with the vector representations of unseen documents, we need to perform an expensive inference by appending more columns to D and running gradient descent on D while fixing the other parameters of the learned model.

3 METHOD

Several works (Mikolov & Dean, 2013; Mikolov et al., 2013b) showcased that syntactic and semantic regularities of phrases and sentences are reasonably well preserved by adding or subtracting word embeddings learned through Word2Vec. It prompts us to explore the option of simply representing a document as an average of word embeddings. Figure 1 illustrates the new model architecture.

(Figure: the target word "ceremony" is predicted from the average/concatenation of the word vectors of its local context ("opening", "for", "the") and of a document vector, itself the average of the embeddings of words sampled from the document ("performance", "praised", "brazil").)

Figure 1: A new framework for learning document vectors.

Similar to Word2Vec or Paragraph Vectors, Doc2VecC consists of an input layer, a projection layer as well as an output layer to predict the target word, "ceremony" in this example. The embeddings of neighboring words ("opening", "for", "the") provide local context while the vector representation of the entire document (shown in grey) serves as the global context. In contrast to Paragraph Vectors, which directly learns a unique vector for each document, Doc2VecC represents each document as an average of the embeddings of words randomly sampled from the document ("performance" at position p, "praised" at position q, and "brazil" at position r).

Huang et al. (2012) also proposed the idea of using an average of word embeddings to represent the global context of a document. Different from their work, we choose to corrupt the original document by randomly removing a significant portion of words, and represent the document using only the embeddings of the words that remain. This corruption mechanism offers us great speedup during training as it significantly reduces the number of parameters to update in back propagation. At the same time, as we are going to detail in the next section, it introduces a special form of regularization which brings great performance improvement.

Here we describe the stochastic process we used to generate a global context at each update. The global context, which we denote as x̃, is generated through an unbiased mask-out/drop-out corruption, in which we randomly overwrite each dimension of the original document x with probability q. To make the corruption unbiased, we set the uncorrupted dimensions to 1/(1-q) times their original value. Formally,

x̃_d = 0 with probability q;  x̃_d = x_d / (1-q) otherwise.   (1)

Doc2VecC then defines the probability of observing a target word w_t given its local context c^t as well as the global context x̃ as

P(w_t | c^t, x̃) = exp(v_{w_t}^T (U c^t + (1/T) U x̃)) / Σ_{w' ∈ V} exp(v_{w'}^T (U c^t + (1/T) U x̃)),

where U c^t is the local context and (1/T) U x̃ the global context. Here T is the length of the document.
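A minimal numpy sketch of the corruption of equation (1) and of the resulting global context is given below; q, T and the toy sizes are illustrative choices of ours. Note how only the surviving word embeddings contribute, and how the corruption is unbiased on average.

```python
import numpy as np

def corrupt(x, q, rng):
    """Zero each dimension w.p. q; rescale survivors by 1/(1-q) (unbiased)."""
    mask = rng.random(x.shape) >= q
    return np.where(mask, x / (1.0 - q), 0.0)

rng = np.random.default_rng(0)
v_size, h, q = 1000, 50, 0.9
U = rng.normal(scale=0.1, size=(h, v_size))
x = np.zeros(v_size)
x[rng.choice(v_size, size=200, replace=False)] = 1.0
T = x.sum()                                   # document length (distinct words here)

x_tilde = corrupt(x, q, rng)
global_ctx = (U @ x_tilde) / T                # average of surviving embeddings

# Unbiasedness check: the mean of many corrupted copies is close to x itself.
mean_of_means = np.mean([corrupt(x, q, rng).mean() for _ in range(1000)])
print(mean_of_means, x.mean())
```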
Exactly computing the probability is impractical; instead we approximate it with negative sampling (Mikolov et al., 2013a):

f(w, c, x̃) := log P(w_t | c^t, x̃) ≈ log σ(v_w^T (U c + (1/T) U x̃)) + Σ_{w' ∼ P_v} log σ(-v_{w'}^T (U c + (1/T) U x̃)),

where σ is the logistic function and P_v stands for a uniform distribution over the terms in the vocabulary. The two projection matrices U and V are then learned to minimize the loss:

l = - Σ_{i=1}^n Σ_{t=1}^{T_i} f(w_t^i, c_t^i, x̃^i).

Given the learned projection matrix U, we then represent each document simply as an average of the embeddings of the words in the document,

d = (1/T) Σ_{w ∈ D} u_w.

We are going to elaborate next why we choose to corrupt the original document with the corruption model in eq. (1) during learning, and how it enables us to simply use the average word embeddings as the vector representation for documents at test time.

We Taylor-expand f with respect to x̃ around the mean x̄ of the corrupting distribution:

f(w, c, x̃) ≈ f(w, c, x̄) + (x̃ - x̄)^T ∇_x f + ½ (x̃ - x̄)^T ∇²_x f (x̃ - x̄),

where ∇_x f and ∇²_x f are the first-order (i.e., gradient) and second-order (i.e., Hessian) derivatives of the log likelihood with respect to x̃. Expansion at the mean x̄ is crucial, as shown in the following steps. Let us assume that for each instance we are going to sample the global context x̃ infinitely many times, and thus compute the expected log likelihood with respect to the corrupted x̃:

E_{p(x̃|x)}[f(w, c, x̃)] ≈ f(w, c, x̄) + ½ tr(E[(x̃ - x̄)(x̃ - x̄)^T] ∇²_x f).

The linear term disappears as E_{p(x̃|x)}[x̃ - x̄] = 0. We substitute in x for the mean x̄ of the corrupting distribution (unbiased corruption) and the matrix Σ_x = E[(x̃ - x̄)(x̃ - x̄)^T] for the variance, and obtain

E_{p(x̃|x)}[f(w, c, x̃)] ≈ f(w, c, x) + ½ tr(Σ_x ∇²_x f).

As each word in a document is corrupted independently of the others, the variance matrix Σ_x simplifies to a diagonal matrix with jth element equal to (q/(1-q)) x_j². As a result, we only need to compute the diagonal terms of the Hessian matrix ∇²_x f. The jth dimension of the Hessian's diagonal evaluated at the mean x involves terms of the form

-σ_{w,c,x}(1 - σ_{w,c,x}) (v_w^T u_j)² and, for the negative samples, -σ_{w',c,x}(1 - σ_{w',c,x}) (v_{w'}^T u_j)² with w' ∼ P_v,

where σ_{w,c,x} denotes the logistic prediction σ(v_w^T (U c + (1/T) U x)). Plugging the expected log likelihood back into the objective, the overall loss becomes

l = - Σ_{i=1}^n Σ_{t=1}^{T_i} f(w_t^i, c_t^i, x^i) + (q/(1-q)) Σ_{j=1}^v R(u_j).

Each f(w_t, c^t, x) in the first term measures the log likelihood of observing the target word w_t given its local context c^t and the document vector d_i = (1/T_i) U x^i. As such, Doc2VecC enforces that a document vector generated by averaging word embeddings can capture the global semantics of the document, and fill in information missed in the local context.

The second term here is a data-dependent regularization. The regularization on the embedding u_j of each word j takes the following form,

R(u_j) ∝ Σ_{i=1}^n Σ_{t=1}^{T_i} (x_{ij}² / (2 T_i²)) [ σ_{w_t,c^t,x^i}(1 - σ_{w_t,c^t,x^i}) (v_{w_t}^T u_j)² + Σ_{w' ∼ P_v} σ_{w',c^t,x^i}(1 - σ_{w',c^t,x^i}) (v_{w'}^T u_j)² ].

Closely examining R(u_j) leads to several interesting findings: 1. the regularizer penalizes more the embeddings of common words: a word j that frequently appears across the training corpus, i.e., x_{ij} = 1 often, will have a bigger regularization than a rare word; 2. on the other hand, the regularization is modulated by σ_{w,c,x}(1 - σ_{w,c,x}), which is small if σ_{w,c,x} → 1 or 0. In other words, if u_j is critical to a confident prediction σ_{w,c,x} when it is active, then the regularization is diminished. A similar effect was observed for dropout training of logistic regression models (Wager et al., 2013) and denoising autoencoders (Chen et al., 2014).
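To make the training objective concrete, here is a sketch of the negative-sampling loss -f(w, c, x̃) for one target position; k negatives are drawn uniformly (P_v), and all shapes and values are toy choices of ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_sampling_loss(U, V, ct, x_tilde, T, target, k, rng):
    hidden = U @ ct + (U @ x_tilde) / T            # local + global context
    loss = -np.log(sigmoid(V[:, target] @ hidden))
    negatives = rng.integers(V.shape[1], size=k)   # w' ~ P_v (uniform)
    loss -= np.log(sigmoid(-V[:, negatives].T @ hidden)).sum()
    return loss

rng = np.random.default_rng(0)
v_size, h = 1000, 50
U = rng.normal(scale=0.1, size=(h, v_size))
V = rng.normal(scale=0.1, size=(h, v_size))
ct = np.zeros(v_size); ct[[3, 17]] = 1.0           # 2-word local context
x_tilde = np.zeros(v_size)                         # survivors rescaled by 1/(1-q)=10
x_tilde[rng.choice(v_size, 20, replace=False)] = 10.0
print(neg_sampling_loss(U, V, ct, x_tilde, T=200, target=42, k=5, rng=rng))
```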
4 EXPERIMENTS

We evaluate Doc2VecC on a sentiment analysis task, a document classification task and a semantic relatedness task, along with several document representation learning algorithms. All experiments can be reproduced using the code available at https://github.com/mchen24/iclr2017.

4.1 BASELINES

We compare against the following document representation baselines: bag-of-words (BoW); Denoising Autoencoders (DEA) (Vincent et al., 2008), a representation learned from reconstructing the original document x using the corrupted one x̃. SDAs have been shown to be the state-of-the-art for sentiment analysis tasks (Glorot et al., 2011). We used the Kullback-Leibler divergence as the reconstruction error and an affine encoder. To scale up the algorithm to a large vocabulary, we only take into account the non-zero elements of x in the reconstruction error and employed negative sampling for the remainder; Word2Vec (Mikolov et al., 2013a) + IDF, a representation generated through a weighted average of word vectors learned using Word2Vec; Doc2Vec (Le & Mikolov, 2014); Skip-thought Vectors (Kiros et al., 2015), a generic, distributed sentence encoder that extends the Word2Vec skipgram model to the sentence level. It has been shown to produce highly generic sentence representations that apply to various natural language processing tasks. We also include RNNLM (Mikolov et al., 2010), a recurrent neural network based language model, in the comparison. In the semantic relatedness task, we further compare to the LSTM-based methods (Tai et al., 2015) that have been reported on this dataset.

Table 1: Classification error of a linear classifier trained on various document representations on the IMDB dataset.

Model | Error rate % (include test) | Error rate % (exclude test)
Bag-of-Words (BoW) | 12.53 | 12.59
RNN-LM | 13.59 | 13.59
Denoising Autoencoders (DEA) | 11.58 | 12.54
Word2Vec + AVG | 12.11 | 12.69
Word2Vec + IDF | 11.28 | 11.92
Paragraph Vectors | 10.81 | 12.10
Skip-thought Vectors | 17.42 | -
Doc2VecC | 10.48 | 11.70

For sentiment analysis, we use the IMDB movie review dataset. It contains 100,000 movie reviews categorized as either positive or negative. It comes with a predefined train/test split (Maas et al., 2011): 25,000 reviews are used for training, 25,000 for testing, and the rest as unlabeled data. The two classes are balanced in the training and testing sets. We remove words that appear less than 10 times in the training set, resulting in a vocabulary of 43,375 distinct words and symbols.

Setup. We test the various representation learning algorithms under two settings: one follows the same protocol proposed in (Mesnil et al., 2014), where the representation is learned using all the available data, including the test set; another one where the representation is learned using the training and unlabeled sets only. For both settings, a linear support vector machine (SVM) (Fan et al., 2008) is trained afterwards on the learned representation for classification. For Skip-thought Vectors, we used the generic model (available at https://github.com/ryankiros/skip-thoughts) trained on a much bigger book corpus to encode the documents. A vector of 4800 dimensions, the first 2400 from the uni-skip model and the last 2400 from the bi-skip model, is generated for each document. In comparison, all the other algorithms produce a vector representation of size 100. The supervised RNN-LM is learned on the training set only. The hyper-parameters are tuned on a validation set subsampled from the training set.

Accuracy. Comparing the two columns in Table 1, we can see that all the representation learning algorithms benefit from including the testing data during the representation learning phase. Doc2VecC achieved similar or even better performance than Paragraph Vectors. Both methods outperform the other baselines, beating the BoW representation by 15%. In comparison with Word2Vec+IDF, which applies post-processing on learned word embeddings to form the document representation, Doc2VecC naturally enforces document semantics to be captured by averaged word embeddings during training. This leads to better performance. Doc2VecC reduces to Denoising Autoencoders (DEA) if the local context words are removed from the paradigm shown in Figure 1. By including the context words, Doc2VecC allows the document vector to focus more on capturing the global context. Skip-thought vectors perform surprisingly poorly on this dataset compared to the other methods. We hypothesized that it is due to the length of paragraphs in this dataset. The average length of paragraphs in the IMDB movie review dataset is 296.5, much longer than the ones used for training and testing in the original paper, which are in the order of 10. As noted in (Tai et al., 2015), the performance of LSTM based methods (similarly, the gated RNN used in Skip-thought vectors) drops significantly with increasing paragraph length, as it is hard to preserve state over long sequences of words.

Time. Table 2 summarizes the time required by these algorithms to learn and generate the document representation. Word2Vec is the fastest one to train. Denoising Autoencoders and Doc2VecC second that. The number of parameters that needs to be back-propagated in each update is increased by the number of surviving words in x̃.
We found that both models are not sensitive to the corruption rate q in the noise model. Since the learning time decreases with a higher corruption rate, we used q = 0.9 throughout the experiments. Paragraph Vectors takes longer to train as there are more parameters (linear in the number of documents in the learning set) to learn. At test time, Word2Vec+IDF, DEA and Doc2VecC all use (weighted) averaging of word embeddings as the document representation. Paragraph Vectors, on the other hand, requires another round of inference to produce the vector representation of unseen test documents. It takes Paragraph Vectors 4 minutes and 17 seconds to infer the vector representations for the 25,000 test documents, in comparison to 7 seconds for the other methods. As we did not re-train the Skip-thought vector models on this dataset, the training time reported in the table is the time it takes to generate the embeddings for the 25,000 training documents (as reported in the original paper, training of the skip-thought vector model on the book corpus dataset takes around 2 weeks on GPU). Due to the repeated high-dimensional matrix operations required for encoding long paragraphs, it takes a fairly long time to generate the representations for these documents. Similarly for testing. The experiments were conducted on a desktop with an Intel i7 2.2GHz CPU.

Table 2: Learning time and representation generation time required by different representation learning algorithms.

Model | Learning time | Generation time
Denoising Autoencoders | 3m 23s | 7s
Word2Vec + IDF | 2m 33s | 7s
Paragraph Vectors | 4m 54s | 4m 17s
Skip-thought | 2h | 2h
Doc2VecC | 4m 30s | 7s

Data dependent regularization. As explained in Section 3.1, the corruption introduced in Doc2VecC acts as a data-dependent regularization that suppresses the embeddings of frequent but uninformative words. Here we conduct an experiment to examine the effect. We used a cutoff of 100 in this experiment. Table 3 lists the words having the smallest l2 norm of embeddings found by different algorithms.
The number inside the parenthesis after each word is the number of times this word appears in the learning set.

Table 3: Words with embeddings closest to 0 learned by different algorithms.

Word2Vec: harp(118) distasteful(115) switzerland(101) shabby(103) fireworks(101) heavens(100) thornton(108) endeavor(100) dense(108) circumstance(119) debacle(103)
Paragraph Vectors: harp(118) dense(108) reels(115) fireworks(101) its'(103) unnoticed(112) pony(102) fulfilled(107) heavens(100) bliss(110) canned(114) shabby(103) debacle(103)
Doc2VecC: ,(1099319) .(1306691) the(1340408) of(581667) and(651119) up(49871) to(537570) that(275240) time(48205) endeavor(100) here(21118) way(31302) own(13456)

In Word2Vec or Paragraph Vectors, the least frequent words have embeddings that are close to zero, despite some of them being indicative of sentiment, such as debacle, bliss and shabby. In contrast, Doc2VecC manages to clamp down the representation of words that frequently appear in the training set but are uninformative, such as symbols and stop words.

Subsampling frequent words. Note that for all the numbers reported, we applied the trick of subsampling of frequent words introduced in (Mikolov & Dean, 2013) to counter the imbalance between frequent and rare words. It is critical to the performance of the simple Word2Vec+AVG, as it is the sole remedy to diminish the contribution of common words in the final document representation. If we were to remove this step, the error rate of Word2Vec+AVG would increase from 12.1% to 13.2%. Doc2VecC, on the other hand, naturally exerts a stronger regularization toward embeddings of words that are frequent but uninformative, and therefore does not rely on this trick.

4.3 WORD ANALOGY

In Table 3, we demonstrated that the corruption model introduced in Doc2VecC dampens the embeddings of words which are common and non-discriminative (stop words). In this experiment, we are going to quantitatively compare the word embeddings generated by Doc2VecC to the ones generated by Word2Vec or Paragraph Vectors on the word analogy task introduced by Mikolov et al. (2013a). The dataset contains five types of semantic questions and nine types of syntactic questions, with a total of 8,869 semantic and 10,675 syntactic questions. The questions are answered through simple linear algebraic operations on the word embeddings generated by the different methods. Please refer to the original paper for more details on the evaluation protocol.
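For reference, a minimal sketch of how a single analogy question "a is to b as c is to ?" is answered by vector arithmetic follows; `emb` is any {word: unit vector} mapping, and the random toy vectors are ours (so the printed answer is arbitrary).

```python
import numpy as np

def answer_analogy(emb, a, b, c):
    """Return the word whose vector is closest to emb[b] - emb[a] + emb[c]."""
    query = emb[b] - emb[a] + emb[c]
    query /= np.linalg.norm(query)
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):              # exclude the question words
            continue
        sim = float(vec @ query)           # cosine similarity (unit vectors)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

rng = np.random.default_rng(0)
emb = {}
for w in ["king", "man", "woman", "queen", "river"]:
    v = rng.normal(size=50)
    emb[w] = v / np.linalg.norm(v)
print(answer_analogy(emb, "man", "king", "woman"))
```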
We trained the word embeddings of the different methods using the English news dataset released under the ACL workshop on statistical machine translation. The training set includes close to 15M paragraphs with 355M tokens. We compare the performance of word embeddings trained by the different methods with increasing embedding dimensionality as well as increasing training data.

(Figure: top-1 accuracy (%) of Paragraph Vectors, Word2Vec and Doc2VecC as a function of the number of paragraphs used for learning (1M to 15M), for embedding sizes (a) h = 50 and (b) h = 100.)

Figure 2: Accuracy on a subset of the Semantic-Syntactic Word Relationship test set. Only questions containing words from the most frequent 30k words are included in the test.

We observe similar trends as in Mikolov et al. (2013a). Increasing the embedding dimensionality as well as the training data size improves the performance of the word embeddings on this task. However, the improvement is diminishing. Doc2VecC produces word embeddings which perform significantly better than the ones generated by Word2Vec. We observe close to a 20% uplift when we train on the full training corpus. Paragraph Vectors, on the other hand, performs surprisingly badly on this dataset. Our hypothesis is that, due to the large capacity of the model architecture, Paragraph Vectors relies mostly on the unique document vectors to capture the information in a text document instead of learning the word semantic or syntactic similarities. This also explains why the PV-DBoW (Le & Mikolov, 2014) model architecture proposed in the original work, which completely removes the word embedding layers, performs comparably to the distributed memory version.

In Table 4, we list a detailed comparison of the performance of the word embeddings generated by Word2Vec and Doc2VecC on the 14 subtasks, when trained on the full dataset with embeddings of size 100. We can see that Doc2VecC significantly outperforms the word embeddings produced by Word2Vec across almost all the subtasks.

Table 4: Top-1 accuracy on the 5 types of semantic and 9 types of syntactic questions.

Semantic questions | Word2Vec | Doc2VecC
capital-common-countries | 73.59 | 81.82
capital-world | 67.94 | 77.96
currency | 17.14 | 12.86
city-in-state | 34.49 | 42.86
family | 68.71 | 64.62

Syntactic questions | Word2Vec | Doc2VecC
gram1-adjective-to-adverb | 19.25 | 20.32
gram2-opposite | 14.07 | 25.54
gram3-comparative | 60.21 | 74.47
gram4-superlative | 52.87 | 55.40
gram5-present-participle | 56.34 | 65.81
gram6-nationality-adjective | 88.71 | 91.03
gram7-past-tense | 47.05 | 51.86
gram8-plural | 50.28 | 61.27
gram9-plural-verbs | 25.38 | 39.69

4.4 DOCUMENT CLASSIFICATION

For the document classification task, we use a subset of the Wikipedia dump, which contains over 300,000 Wikipedia pages in 100 categories. The 100 categories include categories under sports, entertainment, literature, and politics etc. Examples of categories include American drama films, Directorial debut films, Major League Baseball pitchers and Sydney Swans players. Body texts (the second paragraph) were extracted for each page as a document. For each category, we select 1,000 documents with a unique category label; 100 documents are used for training and 900 documents for testing. The remaining documents are used as unlabeled data. The 100 classes are balanced in the training and testing sets. For this data set, we learn the word embedding and document representation for all the algorithms using all the available data. We apply a cutoff of 10, resulting in a vocabulary of size 107,691.

Table 5: Classification error (%) of a linear classifier trained on various document representations on the Wikipedia dataset.

Model | BoW | DEA | Word2Vec + AVG | Word2Vec + IDF | Paragraph Vectors | Doc2VecC
h = 100 | 36.03 | 32.30 | 33.2 | 33.16 | 35.78 | 31.92
h = 200 | 36.03 | 31.36 | 32.46 | 32.48 | 34.92 | 30.84
h = 500 | 36.03 | 31.10 | 32.02 | 32.13 | 33.93 | 30.43
h = 1000 | 36.03 | 31.13 | 31.78 | 32.06 | 33.02 | 30.24

Table 5 summarizes the classification error of a linear SVM trained on representations of different sizes. We can see that most of the algorithms are not sensitive to the size of the vector representation. Doc2Vec benefits most from increasing the representation size. Across all sizes of representations, Doc2VecC outperforms the existing algorithms by a significant margin. In fact, Doc2VecC can achieve the same or better performance with a much smaller representation vector.
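The test-time pipeline this evaluation relies on is simple enough to sketch directly: represent each document as the average of its word embeddings and classify with a linear model. Scikit-learn's LinearSVC stands in here for the LIBLINEAR setup of (Fan et al., 2008); the data and sizes are toy values of ours.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
v_size, h, n_docs = 1000, 100, 200
U = rng.normal(scale=0.1, size=(h, v_size))        # learned word embeddings

def doc_vector(word_ids):
    """Document representation: average of the word embeddings (columns of U)."""
    return U[:, word_ids].mean(axis=1)

docs = [rng.integers(v_size, size=int(rng.integers(20, 60))) for _ in range(n_docs)]
X = np.stack([doc_vector(d) for d in docs])
y = rng.integers(2, size=n_docs)                   # toy binary labels
clf = LinearSVC().fit(X, y)
print(clf.score(X, y))
```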
In fact, Doc2VecC can achieve same or better performance with a much smaller representation vector.\nFigure 3 visualizes the document representa- tions learned by Doc2Vec (left) and Doc2VecC. right) using t-SNE (Maaten & Hinton. 2008) We can see that documents from the same cat- egory are nicely clustered using the representa-. tion generated by Doc2VecC. Doc2Vec, on the. other hand, does not produce a clear separation. between different categories, which explains its. worse performance reported in Table|5\nFigure 4 visualizes the vector representation. generated by Doc2VecC w.r.t. coarser catego-. rization. we manually grouped the 100 cate- gories into 7 coarse categories, television, al-. bums, writers, musicians, athletes, species and. actors. Categories that do no belong to any of these 7 groups are not included in the figure..\nModel BOW DEA Word2Vec + AVG Word2 Vec + IDF Paragraph Vectors Doc2 Vec h = 100 36.03 32.30 33.2 33.16 35.78 31.92 h = 200 36.03 31.36 32.46 32.48 34.92 30.84 h = 500 36.03 31.10 32.02 32.13 33.93 30.43 h = 1000 36.03 31.13 31.78 32.06 33.02 30.24 30 40 30 20 20 10 10 0 0 10 10 20 -20 30 -30 40 -30 -20 10 0 10 20 30 -40 -30 -20 -10 0 10 20 30 40 (a) Doc2Vec (b) Doc2VecC\n40 television albums 30 writers musicians athletes 20 species actors/actresses 10 0 -10 -20 -30 -40 -40 -30 -20 -10 0 10 20 30 40\ntelevision albums 30 writers musicians athletes 20 species actors/actresses 10 0 -10 -20 -30 -40 40 -30 -20 -10 0 10 20 30 40\nFigure 4: Visualization of Wikipedia Doc2Vec( vectors using t-SNE\nWe can see that documents belonging to a coarser category are grouped together. This subset in. cludes is a wide range of sports descriptions, ranging from football, crickets, baseball, and cycling. etc., which explains why the athletes category are less concentrated. In the projection, we can see documents belonging to the musician category are closer to those belonging to albums category thar those of athletes or species."}, {"section_index": "8", "section_name": "4.5 SEMANTIC RELATEDNESS", "section_text": "We test Doc2VecC on the SemEval 2014 Task 1: semantic relatedness SICK dataset (Marelli et al.. 2014). Given two sentences, the task is to determine how closely they are semantically related. The set contains 9,927 pairs of sentences with human annotated relatedness score, ranging from 1 to 5 A score of 1 indicates that the two sentences are not related, while 5 indicates high relatedness. The set is splitted into a training set of 4,500 instances, a validation set of 500, and a test set of 4,927..\nWe compare Doc2VecC with several winning solutions of the competition as well as several more. recent techniques reported on this dataset, including bi-directional LSTM and Tree-LSTM3|trained from scratch on this dataset, Skip-thought vectors learned a large book corpus[4(Zhu et al.2015) and produced sentence embeddings of 4,800 dimensions on this dataset. We follow the same proto-. col as in skip-thought vectors, and train Doc2VecC on the larger book corpus dataset. Contrary to. the vocabulary expansion technique used in (Kiros et al.|2015) to handle out-of-vocabulary words . we extend the vocabulary of the learned model directly on the target dataset in the following way:. we use the pre-trained word embedding as an initialization, and fine-tune the word and sentence. representation on the SICK dataset. Notice that the fine-tuning is done for sentence representation. learning only, and we did not use the relatedness score in the learning. This step brings small im-. 
Table 6 summarizes the performance of the various algorithms on this dataset. Despite its simplicity, Doc2VecC significantly out-performs the winning solutions of the competition, which are heavily feature-engineered toward this dataset, and several baseline methods, noticeably the dependency-tree RNNs introduced in (Socher et al., 2014), which rely on expensive dependency parsers to compose sentence vectors from word embeddings. The performance of Doc2VecC is slightly worse than the LSTM based methods or skip-thought vectors on this dataset, while it significantly outperforms skip-thought vectors on the IMDB movie review dataset (11.70% error rate vs 17.42%). As we hypothesized in the previous section, while Doc2VecC is better at handling longer paragraphs, LSTM-based methods are superior for relatively short sentences (of length in the order of 10s). We would like to point out that Doc2VecC is much faster to train and test compared to skip-thought vectors. It takes less than 2 hours to learn the embeddings on the large book corpus for Doc2VecC on a desktop with an Intel i7 2.2GHz CPU, in comparison to the 2 weeks on GPU required by skip-thought vectors.

Table 6: Test set results on the SICK semantic relatedness task. The first group of results are from the submissions to the 2014 SemEval competition; the second group includes several baseline methods reported in (Tai et al., 2015); the third group are methods based on LSTMs reported in (Tai et al., 2015) as well as the skip-thought vectors (Kiros et al., 2015).

Method | Pearson's r | Spearman's ρ | MSE
Illinois-LH | 0.7993 | 0.7538 | 0.3692
UNAL-NLP | 0.8070 | 0.7489 | 0.3550
Meaning Factory | 0.8268 | 0.7721 | 0.3224
ECNU | 0.8279 | 0.7689 | 0.3250
Mean vectors (Word2Vec + avg) | 0.7577 | 0.6738 | 0.4557
DT-RNN (Socher et al., 2014) | 0.7923 | 0.7319 | 0.3822
SDT-RNN (Socher et al., 2014) | 0.7900 | 0.7304 | 0.3848
LSTM (Tai et al., 2015) | 0.8528 | 0.7911 | 0.2831
Bidirectional LSTM (Tai et al., 2015) | 0.8567 | 0.7966 | 0.2736
Dependency Tree-LSTM (Tai et al., 2015) | 0.8676 | 0.8083 | 0.2532
combine-skip (Kiros et al., 2015) | 0.8584 | 0.7916 | 0.2687
Doc2VecC | 0.8381 | 0.7621 | 0.3053

5 CONCLUSION

We introduce a new model architecture, Doc2VecC, for document representation learning. It is very efficient to train and test thanks to its simple model architecture. Doc2VecC intrinsically makes sure that a document representation generated by averaging word embeddings captures the semantics of the document during learning. It also introduces a data-dependent regularization which favors informative or rare words while dampening the embeddings of common and non-discriminative words. As such, each document can be efficiently represented as a simple average of the learned word embeddings. In comparison to several existing document representation learning algorithms, Doc2VecC outperforms not only in testing efficiency, but also in the expressiveness of the generated representations.

REFERENCES

Minmin Chen, Zhixiang Xu, Kilian Weinberger, and Fei Sha. Marginalized denoising autoencoders for domain adaptation. arXiv preprint arXiv:1206.4683, 2012.

Andrew M Dai and Quoc V Le.
Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079-3087, 2015.

Andrew M Dai, Christopher Olah, and Quoc V Le. Document embedding with paragraph vectors. arXiv preprint arXiv:1507.07998, 2015.

Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391, 1990.

Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. Liblinear: A library for large linear classification. JMLR, 9(Aug):1871-1874, 2008.

Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Domain adaptation for large-scale sentiment classification: A deep learning approach. In ICML, pp. 513-520, 2011.

Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. Improving word representations via global context and multiple word prototypes. In ACL, pp. 873-882, 2012.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. Character-aware neural language models. arXiv preprint arXiv:1508.06615, 2015.

Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3294-3302, 2015.

David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022, 2003.

Minmin Chen, Kilian Q Weinberger, Fei Sha, and Yoshua Bengio. Marginalized denoising autoencoders for nonlinear representations. In ICML, pp. 1476-1484, 2014.

Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In ACL, pp. 142-150, 2011.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pp. 3, 2010.

Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In HLT-NAACL, volume 13, pp. 746-751, 2013b.

Jeff Mitchell and Mirella Lapata. Composition in distributional models of semantics. Cognitive Science, 34(8):1388-1429, 2010.

Gerard Salton and Christopher Buckley. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5):513-523, 1988.

Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, volume 1631, pp. 1642, 2013.

Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207-218, 2014.

Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.

Laurens Van Der Maaten, Minmin Chen, Stephen Tyree, and Kilian Q Weinberger.
Learning with marginalized corrupted features. In ICML (1), pp. 410-418, 2013.

Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096-1103. ACM, 2008.

Stefan Wager, Sida Wang, and Percy S Liang. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems, pp. 351-359, 2013.

Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, pp. 90-94. Association for Computational Linguistics, 2012.

Ainur Yessenalina and Claire Cardie. Compositional matrix-space models for sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 172-182. Association for Computational Linguistics, 2011.

Xiang Zhang and Yann LeCun. Text understanding from scratch. arXiv preprint arXiv:1502.01710, 2015.

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. arXiv preprint arXiv:1506.06724, 2015.
INTRODUCING ACTIVE LEARNING FOR CNN UNDER THE LIGHT OF VARIATIONAL INFERENCE

Melanie Ducoffe & Frederic Precioso
{ducoffe, precioso}@i3s.unice.fr

ABSTRACT

One main concern of the deep learning community is to increase the capacity of representation of deep networks by increasing their depth. This requires scaling up the size of the training database accordingly. Indeed, a major intuition lies in the fact that the depth of the network and the size of the training set are strongly correlated. However, recent works tend to show that deep learning may be handled with smaller datasets as long as the training samples are carefully selected (let us mention for instance curriculum learning). In this context we introduce a scalable and efficient active learning method that can be applied to most neural networks, especially Convolutional Neural Networks (CNN). To the best of our knowledge this paper is the first of its kind to design an active learning selection scheme based on a variational inference for neural networks. We also deduce a formulation of the posterior and prior distributions of the weights using statistical knowledge on the Maximum Likelihood Estimator. We describe our strategy to come up with our active learning criterion. We assess its consistency by checking the accuracy obtained by successive active learning steps on two benchmark datasets, MNIST and USPS.
We also demonstrate its scalability towards increasing training set size..\nSuch a solution is immediate in the process but fails to model the correlations between samples Labeling one sample at-a-time may therefore lead to the labeling of another sample totally useless.\nIn our work, batches of actively selected samples are added at each training iteration. We propose a batch active learning framework designed for deep architectures, especially Deep Convolutional Neural Networks (CNN).\nBatch active learning is highly suitable for deep networks which are trained on minibatches of data. at each iterations. Indeed training with minibatches help the training of deep networks, and we empirically noticed that the size of the minibatch is a major hyperparameter. Thus it makes sense to. query a batch of unlabelled data whose size would be proportionnal to the size of a minibatch. In our. work, batches have the same size as the minibatches but they could be decorrelated by considering. importance sampling technigues.\nOur model focuses on log loss which is involved in the training process of most neural networks. T achieve the required scalability of active learning for deep architectures, we step away from traditiona active learning methods and focus our attention on a more general setting: Maximum Likelihooc Estimation (MLE) and Bayesian inference. Provided certain assumptions, our active selection relie on a criterion which is based on Fisher information and is obtained from the minimization of stochastic method for variational inference. Our active selection relies on the Fisher matrices oi the unlabeled data and on the data selected by the active learning step. An approximation of Fishe information, based on a diagonal Kronecker-block decomposition makes our criterion computationall affordable for an active greedy selection scheme.\nVariational methods have been previously explored like in Graves(2011) as a tractable approximation to Bayesian inference for Artificial Neural Networks (ANNs). One advantage of such a representation is that it leads to a two-term active learning criterion: one related to the prediction accuracy on the observed data and the second term expressing the model complexity. Such a two-fold criterion is, to the best of our knowledge, the first of the kind scalable for active learning. Such an expression may help both to analyze and to optimize ANNs, not only in an active learning framework but also for curriculum learning.\nActive learning is a framework to automatize the selection of instances to be labeled in a learning. process. Active learning offers a variety of strategies where the learner actively selects which samples. seem \"optimal'' to annotate, so as to reduce the size of the labeled training set required to achieve equivalent performance\nWe consider the context of pool-based active learning where the learner selects its queries among a given unlabeled data set. For other variants (query synthesis, (stream-based) selective sampling) we. refer the reader toSettles(2012)\nWhen it comes to pool-based active learning, the most intuitive approaches focus on minimizing. some error on the target classifier. Uncertainty sampling minimizes the training error by querying. unlabeled data on which the current classifier (i.e. from previous training iteration) is assigning. labels with the weakest confidence. This method, uncertainty sampling, while being the least. 
computational consuming among all active learning techniques has the main drawback of ignoring much of the output distribution classes, and is prone to querying outliers. Thanks to its low cost. and easy setup, uncertainty has been adapted to deep architectures for sentiment classificationZhou. et al.(2010). However, deep architectures are subject to adversarial examples, a type of noise we. suspect uncertainty selection to be highly sensitive toSzegedy et al.(2014);Goodfellow et al.(2015 Other strategies (expected error reduction, expected output variance reduction) directly minimize the\nWe dedicate section Related works to the presentation of active learning literature. Section Covering. presents the theoretical aspects of our active learning framework, while section Active learning as a greedy selection scheme details our greedy algorithm. We then assess our active learning method. through experiments on MNIST and USPS benchmarks. We discuss possible improvements of our. approach and connections with previous MLE-based active learning methods before concluding..\nerror on a validation set after querying a new unlabeled sample. However they are computationall expensive especially when considering neural networks..\nTraditional active learning techniques handle selection of one sample at-a-time. One of the mair drawbacks of the aforementioned active learning techniques is that is does not pay attention to the information held in unlabeled data besides considering it as potential queries. Hence once the strategy for selecting samples to be labeled and added to the training set is defined, the question on the impac of the possible correlation between successive selected samples remains.\nTo that end, one recent class of methods deals with the selection of a batch of samples during. the active process, batch mode active learning. Batch mode active learning selects a set of mos. informative unlabeled sample instead of a unique sample. Such a strategy is highly suitable wher retraining the model is not immediate and require to restart the training from scratch at each iteratior as it is the case for neural networks. A simple strategy (whose has also been used for previous deej. active learning strategyZhou et al.(2010)) is to select a batch of top scoring instances. Howeve that strategy fails to consider the correlation among pairs of samples. The redundancy between the. so-selected samples may therefore hinder the learning process.\nIn the context of a deep learning scenario, if several elements related to the same direction of the gradient are in the same minibatch, the gradient descent in the next learning step may lead at once too close to a local minimum, diverting the process away from the global minimum.\nWhile one sample at-a-time can prevent from being misled that way, it gets prohibitive when considering big data because the number of iterations is equal to the number of learning samples unlike the batch-based strategies..\nRecently some solutions have been proposed for choosing an appropriate subset of samples so a to minimize any significant loss in performance. Those methods consider the minimization of th Kullback-Leibler (KL) divergence between the resampled distribution of any possible subset selecte for the active process and the whole unlabeled data set distribution. A lower bound of the negativ of this KL divergence is then defined and rewritten as a submodular function. 
The minimization o the initial KL divergence becomes then a problem of submodular maximization|Hoi et al.(2006). I Wei et al.(2015), Wei et al have designed several submodular functions so as to answer at best the need of specific classifiers (Naive Bayes Classifiers, Logistic Regression Classifier, Nearest Neighbo Classifier). However, their approach is hardly scalable to handle all the information from non-shallov classifiers such as deep networks.\nP(A|w)P(wa) F(a,) = -Ew~Q() ln Q(w|B)\nF(a,) = Ew~2()(L(A;w)) + KL(Q(w) l| P(a))\nAnother solution to minimize the correlation among a set of queries is to perform bayesian inference. on the weights of a neural network. In a bayesian context, a neural network is considered as a. oarametric model which assigns a conditional probability on the observed labeled data A given a set of weights w. The weights follow some prior distribution P(a) depending on the parameter a and the posterior distribution of the weights P(w A, a) is deduced. The goal is thus to maximize the. posterior probability of the weights on the observed data A. Indeed bayesian inference expresses. the uncertainty of the weights which consequently leads to a relevant exploration of the underlying. distribution of the input data X. When it comes to active learning, the learner needs not only to. estimate the posterior given the observed data A but also to consider the impact of new data on that. oosterior Golovin et al.(2010). In our context, bayesian inference is intractable, partially due to the nigh number of weights involved in a deep network. To solve this issue,Graves[(2011) introduced. a variational approximation to perform bayesian inference on neural networks. Specifically he. approximates the posterior distribution P(w A, a) with a tractable distribution of his choice. Q(w 3) depending on a new parameter . The quality of the approximation Q(w 3) compared. to the true posterior P(w A, a) is measured by the variational free energy F with respect to the. parameters and . F has no upper bound but gets closer to zero as both distributions become more. and more similar.\nUnder certain assumptions on the family distribution for the posterior and prior of the weights diagonal gaussian ...), Graves proposed a backpropagation compatible algorithm to train an ensemble of networks, whose weights are sampled from a shared probability distribution..\nThe primary purpose of the variational free energy is to propose a new training objectif for neural. network by learning a and . In an active learning context, the main drawback of variationa free energy based method is that it requires to update by backpropagation the parameters for each. unlabeled data submitted as a query. However we know from statistical assumption on the maximum likelihood the posterior and prior distribution of trained weights given the current labeled training. set: if and only if we consider trained networks, we know how to build a and in a unique iteratior without backpropagation. This knowledge helps us to extend Graves first objectives to the use of. variational free energy to measure how new observations affect the posterior..\n3.1 VARIATIONAL INFERENCE ON NEURAL NETWORK WITH GAUSSIAN POSTERIOR WEIGHT DISTRIBUTION\nAs done for the majority of neural networks, we measure the error of the weights w by the negative log likelihood on an observed set of annotated data A:\nWhen assuming that an arbitrary parameter W* is governing the data generation process, we knov. 
that the expected negative log likelihood is lower bounded by the expected negative log likelihood o the true parameter W* governing the data generation process. What it means is that no distributiol describes the data as well as the true distribution that generated it. It turns out that, under certair. assumptions, we can prove using the central limit theorem that the MLE is asymptotically norma with a mean equal to the true parameter value and variance equal to the inverse of the expected Fishe information evaluated at the true parameter.\nIf we denote by X the underlying data distribution, Wx the MLE and W* the true parameter, we know that Wx is a sample from a multivariate gaussian distribution parametrized by W*. Note that in this context we assume the Fisher matrices are invertible.\nWx ~ N(W*,Ix'(W*)) Wy ~ N(W*,I'(W+)\nWe thus notice than in an active learning context, the learner is trained on data uniformly sampled. from Y. while the optimal solution would be when training on data uniformly sampled from X\nL(A;w) =- ln(P(yx,w) (x,y)EA\nWe consider the Maximum Likelihood Estimator (MLE) W as the value which makes the data observed A as likely as possible for a fixed architecture. Note than even for a fixed A in the case of neural network, W may not be unique\nW = argminwL(A; w\nWx ~ N(W*,Ix'(W*))\nHowever the expected Fisher information on the underlying distribution is intractable. Eventually. using the law of large numbers, we know that the observed Fisher information converges to the. expected Fisher information as the sample size increases. Another theoretical limitation is that the. true parameter is unknown but we can approximate its observed Fisher information with the observed Fisher information at the MLE because of the consistency of the MLE. For a sake of simplicity we. keep the same notation for observed and expected Fisher matrix..\nLet denote by Y the random variable after resampling the underlying distribution X using an active learning strategy. W*, W* are the true parameters with respect to their respective data distributions and their respective MLE variables Wx, Wy, then the following relations hold:\nThe asymptotic behaviour provides us with a prior distribution of the weights based on the data distribution X. In our context of active learning, we approximate the posterior distribution with the MLE distribution induced by the resampling Y. Hence we define a prior and posterior distributior which did not require to be learnt by backpropagation directly but depend on the two data distributior X and Y.\nOur active learning scheme relies on the selection of input data whose induced MLE distribution Q(y) is minimizing the variational free energy.\nIt consists in the minimization of the sum of two terms which we denote respectively by the training. factor Ew~Q()((A; W)) and the generalization factor KL(Q() I| P(a)). It is possible to analyze both terms independently and explain their role into the minimization:\nIf the expression of the variational free energy provides us a theoretical context to work on, the usage. of Fisher matrices of deep networks renders it computationally unaffordable. Indeed the Fisher matrix. is a quadratic matrix in terms of the number of parameters of the deep networks. Because of the huge number of parameters involved, such matrices takes a lot of memory and processing them costs a lot. of ressources, especially if the operations may be repeated often in the framework (as it would be the. 
case for every possible auery processed by an active learning scheme.).\nP() = P(ax) = N(W*,Ix'(W*)) Q() = Q(Y) = N(W*,Iy'(W*)\nY = argminy F(ax,y)\nTraining factor: Ideally, the Cramer Rao bound implies that the minimum on the training factor is reached when Q() matches the asymptotically most efficient estimator of the opti mal parameter on the error loss on the observed data. Hence the training factor corresponds to the minimization of the error on the observed data A. Generalization factor: Empirical results [Choromanska et al.[(2015) tend to show tha the variance of the accuracy diminishes as the depth of the classifier increases. So oui ultimate goal would be to converge to any set of parameters of the MLE distribution as theii effectiveness is similar. The goal of the generalization factor is to converge to the asymptotic distribution on the whole input distribution X and to minimize the error of prediction of nev input data.\nThe next section|3.2|explains the different approximation proposed to deduce a more friendly user criterion.\nBased on their decomposition definition, we define the evaluation of blocks of the Fisher information at a certain point x;(x,l, Tx,t) and an empirical estimation of the Fisher matrix on a set of data A. A sum up of their decomposition is presented in Eq. (9) while the exact content of the kronecker\nblocks w and is left as undescribed in this p per for the sake of concision.\nIA(W) = diag([yA,t(W) TA,(W)]=1 1 xi EA 1 A. A xi EA\nThe strength of this decomposition lies in the properties of block diagonal combined with those of the kronecker product. and - are respectively related to the covariance matrix of the activation and the covariance of the derivative given the input of a layer. Recent deep architectures tend to prevail the depth over the width (i.e. the number of input and output neurons) so this expression becomes really suitable and tractable."}, {"section_index": "3", "section_name": "3.3 APPROXIMATION OF THE TRAINING FACTOR", "section_text": "Despite the block kronecker product approximation of the Fisher matrix, sampling on Q() requires to compute the inverse. Because the kronecker blocks may still have an important number of parameters involved (especially the first fully connected layer suceeding to a convolutional layer), the inverse of the blocks may be still too computationally expensive. To approximate the training factor we opt for a second order approximation of the log likelihood for parameters W close to the mean parameter W* of Q().\naL(A;W*) (W - W+) L(A; W) ~ L(A; W*) + aw aw'aw\nOur first approximation consists in assuming that the MLE parameter w of the currently trained network is a good approximator of W*. Because the network has converged on the current set of. observed data A the first derivative of the log likelihood is also set to zero. Hence Eq. (10) thus. becomes:\nL(A;W) ~ L(A;w) +(W-w)I'(w)(W-w\nTo compute the expectation over the range of weights sampled from Q() we need to upperbound the expectation of the dot product of W given the Fisher matrix. Because we assume our Fisher matrices invertible, and because a covariance matrix is at least semi-definite, our Fisher matrices are positive definite matrix. Hence every eigenvalue is positive and the trace of the Fisher matrix is greater than its maximum eigenvalue. 
From basic properties of the variance covariance matrix, if we denote by N the number of parameters in the network we obtain the following upperbound for the training factor:\nN Ew~2()(L(A; W)) L(A;w) + Tr(I'(w)'I'(w)I'(w)\nWhen it comes to the trace of the inverse, we approximate it by the closest lower bound with the inverse of the trace like inWei et al.(2015)\nOur generalization factor corresponds to the KL divergence between the approximation of our. posterior Q() and the prior P(a). Because both distributions are multivariate gaussians, we have a. direct formulation of the KL which is always definite since the Fisher matrices are invertible.\nN N2 w~2()(L(A; W)) L(A;w) + / Tr(Iy(w)IA(w)Iy(w)T)\ndet(Ix'(W*)) KL(Q() II P(a)) = N+Tr(Ix(W*)Iy'(W*))+(Wx-W*)TIx(W*)(W*-W*) n 2 det(IY'(W))\nN KL(Q() l| P(a)) N Tr(Iy(w)Ix'(w))\nIn the previous subsections, we proposed independent approximations of both our sub-criteria: th training factor and the generalization factor. However the scale of our approximations may nc be balanced so we sum up our criterion with an hyperparameter factor y which counterparts th difference of scale between the factors:\nWe approximate the expected Fisher matrices on the underlying distribution Y and X by the observe. Fisher matrices on a set of data sampled from those distributions. This approximation is relevant due to the consistenty of the MLE.\nAs we are in a pool-based selection case, we dispose at first of two sets of data: A and U which denote respectively the annotated observed data and unlabeled data. Note that the derivatives in the Fisher matrix computation implies to know the label of the samples. Thus at each active learning step, an unknown label is approximated by its prediction from the current trained network. We denote by S the subset of data to be queried by an oracle. The size of S is fixed with | S |= K. S is the subset sampled from Y while U is sampled from X. Finally an approximation of F will be:\nN N2 N F X + N / Tr(Is(w)IA(w)Is(w)T Tr(Is(w)I*\nN N2 te(1,L) Tr(s,(w)A,(w)s,(w)T)Tr(Ts,(w)TA,(w)Ts,(w)T) Tr(Ws,1(w)yui(w))Tr(Ts,(w)Tui(w) lE(1,L) N te(1,L) Tr(s,(w)4ui(w))Tr(Ts,t(w)Tui(w))\n4 ACTIVE LEARNING AS A GREEDY SELECTION SCHEME ON THE VARIATIATIONAL FREE ENERGY\nThe selected subset S selected at one step of active learning is only involved through the kronecket product of the Fisher matrix Is(w). We express our approximation of the free energy by a criterion.\nOur first approximation consists in assuming that the MLE parameter w of the currently trained network is a good approximator of both optimal parameters W*, W* like in|Zhang & Oles|(2000) We also upper bound the determinant with a function of the trace and the number N of parameters When it comes to the trace of the inverse, we approximate it again by the closest lower bound with the inverse of the trace.\nN N2 N / Tr(Iy(w)IA(W)Iy(W)T Tr(Iy(w)Ix'(w))\nNow we express the trace based on the approximation of the Fisher matrix: we consider that every Fisher matrix for CNN is a L diagonal block matrix, with L the number of layers of the CNN. Every block is made of a kronecker product of two terms and t. We rely on the properties involved by the choice of this specific matrix topology to obtain a more computationally compliant approximation of F in Eq. (18):\nOnin bset S in Eq. 
(19) N N2 A.U /t ie(1,L) Tr(s,(w)s,t(w)s,(w)T)Tr(Ts,(w)Ts,t(w)Ts,(w)T) Tr(ys,(w)yu.(w))Tr(Ts,(w)Ti + N ln lE(1,L) N lE(1.L) Tr(s,(w)4ui(w))Tr(Ts,(w)ruj(w))\nPseudo-code and illustration of the algorithm are provided in table[1in appendix\nWe demonstrate the validity of our approach on two datasets: MNIST (28-by-28 pictures, 50.000 training samples, 10.0000 validation samples and 10.000 test samples) and USPS (16-by-16 pictures 4185 training samples, 464 validation samples and 4649 testing samples) both gray scaled digits image datasets. We describe the CNN configuration and the hyperparameters settings in table2|in appendix. Note that we do not optimize the hyperparameters specifically for the size of the current annotated training set A. We picked those two similar datasets to judge of the robustness of our method against different size of unlabeled datasets, as expected our method is efficient on both small and large databases."}, {"section_index": "4", "section_name": "5.1 TEST ERROR", "section_text": "We run 10 runs of experiments and average the error on the test set of the best validation error before. a pass of active learning. We start from an annotated training set of the size of one minibatch selected. randomly. We stop both set of experiments after 30% of the training set has been selected (15.000. image for MNIST, 1255 for USPS). We compare the lowest test error achieved so far by our MLE. based method against naive baselines: uncertainty sampling, curriculum sampling and a random. selection of a minibatch of examples. We measure both uncertainty and curriculum scores based on the log likelihood of a sample using as label its prediction on the full network. While uncertainty. selects samples with the highest log likelihood, our version of curriculum does the exact contrary. We. select randomly the set of possible queries D among the unlabeled training data. Its size is set to 30. times the minibatch size. We present the results in two phases for the sake of clarity in figure 1|for. MNIST and figure 2|for USPS: the first rounds of active learning when the annotated training set is. almost empty, and the second round which is more stable in the evolution of the error. In both phases. and for both databases we observe a clear difference between the test error achieved by our MLE. method with the test error obtained by selecting randomly the data to be queried. Moreover the error. achieved by our method on 30 % is close (even equivalent in the case of USPS), to the error achieved. using the standard full training sets defined for both datasets (this error rate is defined as yellow line. groundtruth on the figures). The experiments made appear that curriculum learning is not a good. active learning strategy for both tested datasets. As for the uncertainty selection, it works really well. on MNIST while it fails on USPS. While MNIST is a pretty clean database, USPS contains more. outliers and noisy samples rendering it more difficult in terms of accuracy even though both databases. are designed to assess digit classification. As other works we mentioned in the related work section. we are led to explain uncertainty selection to select useless samples with the amount of outliers and. noisy samples in USPS.\nFinally we estimate our subset S by a greedy procedure: to be more robust to outliers and for reasons of computational efficiency, we select first a pool of samples D C U which we will use as the set of possible queries. 
We recursively build S C D by picking the next sample x; E D which minimizes C(S U {xi}; A,U) among all remaining samples in D. When it comes to the training factor coefficient, we notice that it is a quadratic term in Is(w) which increases the complexity in a greedy selection scheme. Our choice is to estimate the trace in the following way:\nTr(YSU{x}.l(W)WA,l(W)Y w)) ~ Tr(ws1(w)w s1(w)ws1(w))+Tr(ws ,l(W)YA,l(W)Y{x}.l(W)T\nCr(Wsux}1(w)yA1(w)ysi 1(w)) ~ Tr(ws1(w)ws1(w)ws1(w))+Tr(Ws t,1(W)YA,l(W)Y{x}.l(W)T\n60 25 50 20 40 energy 15 energy 30 random random uncertanty 10 uncertainty curriculum 20 curriculum groundtruth 5 10 0. 0. 2 4 6 8 10 12 5 10 15 20 25 30 35 0\nFigure 1: Error on the test set splits in two figures : the first rounds of active learning and the second round which is more stable in the evolution of the error (MNIST)\n70 60 60 50 50 40 energy 40 energy random curriculum 30 uncertainty 30 random curriculum - uncertainty 20 groundtruth 20 10 10 0+ 0+ 0 1 2 3 4 5 6 7 8 9 5 10 15 20 25 30 35\nFigure 2: Error on the test set splits in two figures : the first rounds of active learning and the second round which is more stable in the evolution of the error (USPS).\n40 35 30 aeennnss 25 20 S 15 average processing time 10 5 0 8 16 32 64 128 size of the query.\nFigure 3: Average processing time for one pass of our active learning strategy vs the size of th selected samples for annotation (USPS).\nTo validate our method in terms of scalability and time complexity, we measured in seconds, the. current processor time for one pass of active learning. We repeated this evaluation for different size. of query (8 to 128 unlabeled samples added to the currrent training set). For this experiments we used a laptop with a Titan-X (GTX 980 M) with 8 GB RAM GPU memory. Metrics were reported in. figure[3] Our criterions takes few seconds to select a batch of query of hundreds of unlabeled data. Moreover the evolution of the time given the size of the query is less than linear.."}, {"section_index": "5", "section_name": "6 DISCUSSION", "section_text": "The first point to raise is that our approximation of the posterior is an asymptotic distribution whicl may be unstable on a small subset of observed data, as it is the case for active learning. Sucl a distribution may be regularized by taking the probability provided by the central limit theoren about how well our data fits to the asymptotic gaussian distribution. When it comes to the KFAC. approximation, it suffers from the same issue and could be regularized when evaluating on smal. subset. A refinement of the approximations, especially for the generalization factor, following th. approaches of submodular functions may be investigated..\nFinally, an interesting observation is that our formulation of the variational free energy finds similari. ties with other MLE based active learning criteria previously proposed in the litterature. Indeed, in. Zhang & Oles[(2000) the authors study active learning by looking among the possible resampling of the input distribution. They formulate their criterion as the minimization of the trace of the inverse Fisher of the resampled distribution multiplied by the Fisher matrix on the input distribution:. mins Tr(Is1(w)Iu(w))\nIn a nutshell, we proposed a scalable batch active learning framework for deep networks relying on a variational approximation to perform bayesian inference. 
We deduced a formulation of the posterior and prior distributions of the weights using statistical knowledge on the Maximum Likelihood Estimator. Those assumptions combined with an existing approximation of the Fisher information for neural network, lead us to a backpropagation free active criterion. Eventually we used our own approximations to obtain a greedy active selection scheme.\nOur criterion is the first of the kind to scale batch active learning to deep networks, especially Convolutional Neural Networks. On different databases, it achieves better test accuracy than random sampling, and is scalable with increasing size of queries. It achieves near optimal error on the test set using a limited percentage (30%) of the annotated training set on larger and more reduced dataset. Our works demonstrated the validity of batch mode active learning for deep networks and the promise of the KFAC approximations for deep Fisher matrices for the active learning community. Such a. solution is also interesting as a new technique for curriculum learning approach.."}, {"section_index": "6", "section_name": "REFERENCES", "section_text": "Choromanska. Anna. Henaff. Mikael. Mathieu, Michael, Arous, Gerard Ben, and LeCun, Yann. The loss surfaces of multilaver networks. In A1STATS. 2015\nCohn, David A. Neural network exploration using optimal experiment design. 1994.\nGoodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and Harnessing Adversarial Example ICLR 2015, December 2015.\nHoi, Steven C. H., Jin, Rong, Zhu, Jianke, and Lyu, Michael R. Batch mode active learning and its application to medical image classification. ICML '06, pp. 417-424, New York, NY, USA, 2006\nWe believe our method is a proof of concept for the use of variational inference for active learning on. deep neural networks. However our approximations are subject to improvements which may lead to faster convergence and lower generalization error.\nMartens, James and Grosse, Roger. Optimizing neural networks with kronecker-factored approximate curvature. arXiv preprint arXiv:1503.05671, 2015.\nSzegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow Ian, and Fergus, Rob. Intriguing properties of neural networks. In International Conference on Learning Representations. 2014. URLhttp://arxiv.org/abs/1312.6199\nZhou, Shusen, Chen, Qingcai, and Wang, Xiaolong. Active deep networks for semi-supervised sentiment classification. In ACL International Conference on Computational Linguistics, pp. 1515-1523, 2010.\nTalwalkar, Ameet. Matrix Approximation for Large-scale Learning. PhD thesis, New York, NY USA, 2010.\nAlgorithm 1: Greedy selection of the final query S. Require : A set of initial annotated training examples. Require :l set of initial unlabeled training examples. Require : D set of possible queries, D C U. Require :N number of parameters Require : L number of layers. Require :K number of samples to query. Require : scaling hyperparameter 1 S ={}; D =D 2 for k [|1,L|] do 3 # init the coefficients of each layer with O value:. for x E Dk do 4 for l in [|1, L |] do 5 Vk=[0,0,0,0] 6 7 end end 8 # Compute the inverse Fisher information on the whole input distribution. 9 10 for l in [|1, L|] do a1=yi(A)-1;bl=n(A)-1 11 c =yr(U)-1;d=T(U)-1 12 13 end # coefficient for the greedy selection for xE Dk do. 
14 15 for l in 1, L ] do Vi,0(x) =Tr(Yx,taT,1) 16 Vi,1(x) = Tr(Tx,1blTT,l) 17 Vi.2(x) = Tr(Wx,lCl) 18 19 Vi,3(x) = Tr(Tx,ldl) 20 end 21 # compute the trace Zo(x) =(x+1)2ie(1,L)(Vi,o +Vi,o)(V,i + Vi,1) 22 Z1(x) =(k+1)2te(1,L)(Vi,2+Vi,2)(V;3 + Vi,3) 23 24 end # select the best sample in Dk based on the criterion C:. 25 N Sk+1 Sk U{xk} 26 Dk+1Dk\\{xk} 27 28 end 29 S SK a'If blocks are too big, an approximation of the inverse by a Woodburry-Nystrom is processed T: (2010)\nAlgorithm 1: Greedy selection of the final query S\nZo(x) = (k+1)2tE(1,L)(Vi,o + Vi,o)(Vhi + Vi,1) Z1(x) = tE(1,L)(Vf,2+Vi,2)(V,3+Vi,3)\n\"If blocks are too big, an approximation of the inverse by a Woodburry-Nystrom is processed Talwalkar (2010)\nTable 1: Pseudo code for the greedy selection given the variatiational free energy\nFor reasons of computational efficiency; we first sample a larger subset D C U from which we select the queries. To assert that this subset is not detrimental for uncertainty selection, we present a one shot experiments on USPS with D = U in fig ??.\nTable 2: Set of hyper parameters used respectively on the CNN for MNIST and SVHN\nFigure 4: Error on the test set when we do not sample first a larger subset where to pick the querie (D = U, USPS)\nhyper parameters MNIST USPS # filters [20, 20] [20, 20] filter size [(3,3), (3,3)] [(3,3), (3,3)] pooling size (no stride) [(2,2), (2,2)] [None, (2,2)] activation Rectifier Rectifier # neurons in full layers [200, 200, 50, 10] [300, 50, 10] # batch size 64 8\n80 70 60 50 energy 40 -- uncertainty 30 20 10 0 0 5 10 15 20 25 30 35"}] |
ryXZmzNeg | [{"section_index": "0", "section_name": "IMPROVING SAMPLING FROM GENERATIVE AUTOENCODERS WITH MARKOV CHAINS", "section_text": "Antonia Creswell. Kai Arulkumaran & Anil A. Bharath\nWe focus on generative autoencoders, such as variational or adversarial autoen coders, which jointly learn a generative model alongside an inference model. Gen erative autoencoders are those which are trained to softly enforce a prior on the latent distribution learned by the inference model. We call the distribution to which the inference model maps observed samples, the learned latent distribu- tion, which may not be consistent with the prior. We formulate a Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively decoding and encoding, which allows us to sample from the learned latent distribution. Since. the generative model learns to map from the learned latent distribution, rather than the prior, we may use MCMC to improve the quality of samples drawn from the generative model, especially when the learned latent distribution is far from the prior. Using MCMC sampling, we are able to reveal previously unseen differ- ences between generative autoencoders trained either with or without a denoising criterion."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Unsupervised learning has benefited greatly from the introduction of deep generative models. In particular, the introduction of generative adversarial networks (GANs) (Goodfellow et al.]2014) and variational autoencoders (VAEs) (Kingma & Welling2014) Rezende et al.|2014) has led to a plethora of research into learning latent variable models that are capable of generating data from. complex distributions, including the space of natural images (Radford et al.]2015). Both of these. models, and their extensions, operate by placing a prior distribution, P(Z), over a latent space. Z C Rb, and learn mappings from the latent space, Z, to the space of the observed data, X C Ra..\nWe are interested in autoencoding generative models, models which learn not just the generative. mapping Z +> X, but also the inferential mapping X +> Z. Specifically, we define generative. autoencoders as autoencoders which softly constrain their latent distribution, to match a specified prior distribution, P(Z). This is achieved by minimising a loss, prior, between the latent distribu-. tion and the prior. This includes VAEs (Kingma & Welling2014]Rezende et al.[2014), extensions of VAEs (Kingma et al.]2016), and also adversarial autoencoders (AAEs) (Makhzani et al.]2015). Whilst other autoencoders also learn an encoding function, e : Ra -> Z, together with a decoding. function, d : Rb -> X, the latent space is not necessarily constrained to conform to a specified. probability distribution. This is the key distinction for generative autoencoders; both e and d can. still be deterministic functions (Makhzani et al.2015).\nThe process of encoding and decoding may be interpreted as sampling the conditional probabilities Qo(Z|X) and Pe(X|Z) respectively. The conditional distributions may be sampled using the en- coding and decoding functions e(X; ) and d(Z; 0), where and 0 are learned parameters of the"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "The functions e and d are defined for any input from Ra and R' respectively, however the outputs. of the functions may be constrained practically by the type of functions that e and d are, such that e maps to Z C R and d maps to X C Ra. 
During training however, the encoder, e is only fed with. training data samples, x E X and the decoder, d is only fed with samples from the encoder, z E Z and so the encoder and decoder learn mappings between X and Z..\nencoding and decoding functions respectively. The decoder of a generative autoencoder may be used to generate new samples that are consistent with the data. There are two traditional approaches for sampling generative autoencoders:.\nApproach 1 (Bengio et al. 2014)\nxo ~ P(X), zo ~ Qo(Z|X x0), x1 ~ P(X|Z Z0\nwhere P(Z) is the prior distribution enforced during training and Pe(X|Z) is the decoder trained to map samples drawn from Qo(Z[X) to samples consistent with P(X ). This approach assumes that f Qg(Z|X)P(X)dX = P(Z), suggesting that the encoder maps all data samples from P(X) to a distribution that matches the prior distribution, P(Z). However, it is not always true that f Qg(Z|X)P(X)dX = P(Z). Rather Q(Z|X) maps data samples to a distribution which we call, P(Z):\nQg(Z|X)P(X)dX = P(Z)\nwhere it is not necessarily true that P(Z) = P(Z) because the prior is only softly enforced. The decoder, on the other hand, is trained to map encoded data samples (i.e. samples from I Qo(Z|X)P(X)dX) to samples from X which have the distribution P(X). If the encoder maps observed samples to latent samples with the distribution P(Z), rather than the desired prior distri- bution, P(Z), then:\nPeX|Z)P(Z)dZ F P(X)\nThis suggests that samples drawn from the decoder, Pe(X|Z), conditioned on samples drawn from the prior, P(Z), may not be consistent with the data generating distribution, P(X). However, by conditioning on P(Z):\nThis suggests that to obtain more realistic generations, latent samples should be drawn via z ~ P(Z). rather than z ~ P(Z), followed by x ~ Pe(X|Z). A limited number of latent samples may be drawn from P(Z) using the first two steps in Approach 1 - however this has the drawbacks discussed. in Approach 1. We introduce an alternative method for sampling from P(Z) which does not have the same drawbacks.\nOur main contribution is the formulation of a Markov chain Monte Carlo (MCMC) sampling process for generative autoencoders, which allows us to sample from P(Z). By iteratively sampling the chain, starting from an arbitrary zt=o E Rb, the chain converges to zt-oo ~ P(Z), allowing. us to draw latent samples from P(Z) after several steps of MCMC sampling. From a practical perspective, this is achieved by iteratively decoding and encoding, which may be easily applied to existing generative autoencoders. Because P(Z) is optimised to be close to P(Z), the initial sample,. Zt=o can be drawn from P(Z), improving the quality of the samples within a few iterations..\nWhen interpolating between latent encodings, there is no guarantee that z stays within high density. regions of P(Z). Previously, this has been addressed by using spherical, rather than linear interpo. lation of the high dimensional Z space (White]2016). However, this approach attempts to keep z\nwhere P(X) is the data generating distribution. However, this approach is likely to generate samples similar to those in the training data, rather than generating novel samples that are consistent with the. training data.\nPe(X|Z)P(Z)dZ = P(X)\nQg(Z|X) XCRa ZCRb x P(X) d(z; 0) P(Z) 7. e(x; ) d(z; 0) Z. P(Z) Pe(X|Z)\n(a) VAE (initial) (b) VAE (5 steps) (c) VAE (initial) (d) VAE (5 steps)\nFigure 2: Prior work: Spherically interpolating (White2016) between two faces using a VAE (a c). 
In (a), the attempt to gradually generate sunglasses results in visual artifacts around the eyes In (c), the model fails to properly capture the desired change in orientation of the face, resultin in three partial faces in the middle of the interpolation. This work: (b) and (d) are the result o 5 steps of MCMC sampling applied to the latent samples that were used to generate the origina interpolations, (a) and (c). In (b), the discolouration around the eyes disappears, with the mode settling on either generating or not generating glasses. In (d), the model moves away from multiple faces in the interpolation by producing new faces with appropriate orientations.\nFigure 1: P(X) is the data generating distribution. We may access some samples from P(X) by drawing samples from the training data. Qo(Z|X) is the conditional distribution, modeled by an encoder, which maps samples from Ra to samples in Rb. An ideal encoder maps samples from. P(X) to a known, prior distribution P(Z): in reality the encoder maps samples from P(X) to an unknown distribution P(Z). Pe(X|Z) is a conditional distribution, modeled by a decoder, which maps samples from Rb to Ra. During training the decoder learns to map samples drawn from P(Z). to P(X) rather than samples drawn from P(Z) because the decoder only sees samples from P(Z). Regularisation on the latent space only encourages P(Z) to be close to P(Z). Note that if Lprior is. optimal, then P(Z) overlaps fully with P(Z).\nwithin P(Z), rather than trying to sample from P(Z). By instead applying several steps of MCMC sampling to the interpolated z samples before sampling Pe(X|Z), unrealistic artifacts can be re. duced (see Figure2). Whilst most methods that aim to generate realistic samples from X rely on adjusting encodings of the observed data (White2016), our use of MCMC allows us to walk any latent sample to more probable regions of the learned latent distribution, resulting in more convinc- ing generations. We demonstrate that the use of MCMC sampling improves generations from both VAEs and AAEs with high-dimensional Z; this is important as previous studies have shown that the dimensionality of Z should be scaled with the intrinsic latent dimensionality of the observed data\nOur second contribution is the modification of the proposed transition operator for the MCMC sam. pling process to denoising generative autoencoders. These are generative autoencoders trained us-. ing a denoising criterion, (Seung]1997)[Vincent et al.[|2008). We reformulate our original MCMC sampling process to incorporate the noising and denoising processes, allowing us to use MCMC. sampling on denoising generative autoencoders. We apply this sampling technique to two models. The first is the denoising VAE (DVAE) introduced by Im et al. (2015). We found that MCMC sam- pling revealed benefits of the denoising criterion. The second model is a denoising AAE (DAAE). constructed by applying the denoising criterion to the AAE. There were no modifications to the cost function. For both the DVAE and the DAAE, the effects of the denoising crtierion were not immedi ately obvious from the initial samples. Training generative autoencoders with a denoising criterion. reduced visual artefacts found both in generations and in interpolations. The effect of the denoising. criterion was revealed when sampling the denoising models using MCMC sampling..\nOne of the main tasks in machine learning is to learn explanatory factors for observed data commonly known as inference. 
That is, given a data sample x E X C Ra, we would like. to find a corresponding latent encoding z E Z C Rb. Another task is to learn the inverse,. generative mapping from a given z to a corresponding x. In general, coming up with a suit-. able criterion for learning these mappings is difficult. Autoencoders solve both tasks efficiently. by jointly learning an inferential mapping e(X; $) and generative mapping d(Z;0), using unla-. belled data from X in a self-supervised fashion (Kingma & Welling2014). The basic objec-. tive of all autoencoders is to minimise a reconstruction cost, Lreconstruct, between the original. data, X, and its reconstruction, d(e(X; $); 0). Examples of reconstruct include the squared error. loss, N=1|d(e(xn;$); 0) - xn||2, and the cross-entropy loss, H[P(X)|P(d(e(X; $); 0)] = n=1xn log(d(e(xn;$); 0)) + (1 - xn) log(1- d(e(xn; $); 0))."}, {"section_index": "3", "section_name": "2.1 GENERATIVE AUTOENCODERS", "section_text": "Consider the case where e is constructed with stochastic neurons that can produce outputs from a specified probability distribution, and prior is used to constrain the distribution of outputs to P(Z) This leaves the problem of estimating the gradient of the autoencoder over the expectation EQ (z|X): which would typically be addressed with a Monte Carlo method. VAEs sidestep this by constructing latent samples using a deterministic function and a source of noise, moving the source of stochas ticity to an input, and leaving the network itself deterministic for standard gradient calculations-- technique commonly known as the reparameterisation trick (Kingma & Welling2014). e(X; then consists of a deterministic function, erep(X; ), that outputs parameters for a probability distri bution, plus a source of noise. In the case where P(Z) is a diagonal covariance Gaussian, erep(X;\nAutoencoders may be cast into a probablistic framework, by considering samples x ~ P(X) and z ~ P(Z), and attempting to learn the conditional distributions Qo(Z|X) and Pe(X|Z) as e(X; $) and d(Z; 0) respectively, with Lreconstruct representing the negative log-likelihood of the recon-. struction given the encoding (Bengio] 2009). With any autoencoder, it is possible to create novel x E X by passing a z E Z through d(Z; 0), but we have no knowledge of appropriate choices of z beyond those obtained via e(X; $). One solution is to constrain the latent space to which the encod- ing model maps observed samples. This can be achieved by an additional loss, Lprior, that penalises. encodings far away from a specified prior distribution, P(Z). We now review two types of gener-. ative autoencoders, VAEs (Kingma & Welling2014) Rezende et al.]2014) and AAEs (Makhzani et al.]2015), which each take different approaches to formulating prior.\nFigure 3: Reconstructions of faces from a DVAE trained with additive Gaussian noise: Q(X|X) N(X, 0.25I). The model successfully recovers much of the detail from the noise-corrupted images\nTprior 2n= Another approach is to deterministically output the encodings z. Rather than minimising a met ric between probability distributions using their parameters, we can turn this into a density ratic estimation problem where the goal is to learn a conditional distribution, Q(Z|X), such that the distribution of the encoded data samples, P(Z) = f Q(Z|X)P(X)dX, matches the prior distri bution, P(Z). The GAN framework solves this density ratio estimation problem by transforming i1 into a class estimation problem using two networks (Goodfellow et al.|2014). 
The first network ir GAN training is the discriminator network, D, which is trained to maximise the log probability o1 samples from the \"real' distribution, z ~ P(Z), and minimise the log probability of samples fron the \"fake\" distribution, z ~ Qo(Z|X). In our case e(X; ) plays the role of the second network the generator network, G$, which generates the \"fake\"' samples|'| The two networks compete in a minimax game, where G receives gradients from Dy, such that it learns to better fool Dy. The training objective for both networks is given by prior = argming argmax., Ep(z) [log(D(Z))]+ Ep(x)[log(1- Dy(Gg(X)))] = argming argmaxy, Ep(z)[log(Dy(Z))]+ EQ(Z|x)P(x) log[1 - Ds(Z)]. This formulation can create problems during training, so instead Gg is trained to minimise log(Dw(G(X))), which provides the same fixed point of the dynamics of Gs and D. The result of applying the GAN framework to the encoder of an autoencoder is the deterministic AAE (Makhzani et al.]2015)."}, {"section_index": "4", "section_name": "2.2 DENOISING AUTOENCODERS", "section_text": "In a more general viewpoint, generative autoencoders fulfill the purpose of learning useful repre. sentations of the observed data. Another widely used class of autoencoders that achieve this are. denoising autoencoders (DAEs), which are motivated by the idea that learned features should be. robust to \"partial destruction of the input' (Vincent et al.]20o8). Not only does this require en- coding the inputs, but capturing the statistical dependencies between the inputs so that corrupted data can be recovered (see Figure 3. DAEs are presented with a corrupted version of the input, x E X, but must still reconstruct the original input, x E X, where the noisy inputs are cre-. ated through sampling x ~ C(X|X), a corruption process. The denoising criterion, Ldenoise,. can be applied to any type of autoencoder by replacing the straightforward reconstruction cri-. terion, Lreconstruct(X, d(e(X; $); 0)), with the reconstruction criterion applied to noisy inputs:. Lreconstruct(X, d(e(X; ); 0)). The encoder is now used to model samples drawn from Q(Z|X) As such, we can construct denoising generative autoencoders by training autoencoders to minimise. Ldenoise + Lprior:\nOne might expect to see differences in samples drawn from denoising generative autoencoders and their non-denoising counterparts. However, Figures 4|and|6|show that this is not the case. Im et al.\nWe adapt the variables to better fit the conventions used in the context of autoencoders\nmaps x to a vector of means, E Rb, and a vector of standard deviations, o E R, with the noise e ~ N(0, I). Put together, the encoder outputs samples z = + e O , where O is the Hadamard product. VAEs attempt to make these samples from the encoder match up with P(Z) by using the KL divergence between the parameters for a probability distribution outputted by erep(X; ), and the parameters for the prior distribution, giving Lprior = DkL[Q(Z|X)|P(Z)]. A multivariate Gaussian has an analytical KL divergence that can be further simplified when considering the unit\n(2015) address the case of DVAEs, claiming that the noise mapping requires adjusting the original VAE objective function. Our work is orthogonal to theirs, and others which adjust the training or model (Kingma et al.|2016), as we focus purely on sampling from generative autoencoders after training. 
We claim that the existing practice of drawing samples from generative autoencoders conditioned on z ~ P(Z) is suboptimal, and the quality of samples can be improved by instead conditioning on z ~ P(Z) via MCMC sampling."}, {"section_index": "5", "section_name": "3 MARKOV SAMPLING", "section_text": "We now consider the case of sampling from generative autoencoders, where d(Z; 0) is used to draw samples from Pe(X[Z). In Section[1] we showed that it was important, when sampling Pe(X[Z) to condition on z's drawn from P(Z), rather than P(Z) as is often done in practice. However, we. now show that for any initial zo E Zo = Rb, Markov sampling can be used to produce a chain of. samples zt, such that as t -> oo, produces samples zt that are from the distribution P(Z), which. may be used to draw meaningful samples from Pe(X|Z), conditioned on z ~ P(Z). To speed up. convergence we can initialise zo from a distribution close to P(Z), by drawing zo ~ P(Z).."}, {"section_index": "6", "section_name": "3.1 MARKOV SAMPLING PROCESS", "section_text": "A generative autoencoder can be sampled by the following process\nZo E Zo = Rb. Xt+1 ~ Pe(X|Zt) Zt+1 ~ Qq(Z|Xt+1\nThis allows us to define a Markov chain with the transition operator\nT(Zt+1|Zt) = 2o(Zt+1X)Pe(X|Zt)dX\nDrawing samples according to the transition operator T(Zt+1 Zt) produces a Markov chain. For the. transition operator to be homogeneous, the parameters of the encoding and decoding functions are fixed during sampling.\nWe now show that the stationary distribution of sampling from the Markov chain is P(Z)\nemma 1. T(Zt+1[Zt.) defines an ergodic Markov chain\nProof. For a Markov chain to be ergodic it must be both irreducible (it is possible to get from any state to any other state in a finite number of steps) and aperiodic (it is possible to get from any state to any other state without having to pass through a cycle). To satisfy these requirements, it is more than sufficient to show that T(Zt+1|Zt) > 0, since every z E Z would be reachable from every other z E Z. We show that Pe(X|Z) > 0 and Qg(Z|X) > 0, giving T(Zt+1|Zt) > 0, providing the proof of this in Section[A|of the supplementary material.\nProof. For the transition operator defined in Equation (1), the asymptotic distribution to whicl T(Zt+1|Zt) converges to is P(Z), because P(Z) is, by definition, the marginal of the joint distribu tion Q(Z|X) P(X), over which the Lprior used to learn the conditional distribution Q(Z|X).\nTheorem 1. If T(Zt+1|Zt) defines an ergodic Markov chain, {Z1, Z2...Zt}, then the chain will converge to a stationary distribution, I(Z), from any arbitrary initial distribution. The stationary distribution II(Z) = P(Z).\nZo E Zo = Rb Xt+1~ Pe(X|Zt), Xt+1 ~ C(X|Xt+1), Zt+1 ~ Qg(Z|Xt+1)\nThis allows us to define a Markov chain with the transition operator\nT(Zt+1|Zt) = (Zt+1|X)C(X|X)Pe(X|Zt)dXdX\nThe same arguments for the proof of conver ence of Equation (1) can be applied to Equation (2"}, {"section_index": "7", "section_name": "3.4 RELATED WORK", "section_text": "Our work is inspired by that of Bengio et al. (2013); denoising autoencoders are cast into a proba bilistic framework, where P(X|X) is the denoising (decoder) distribution and C(X|X) is the cor ruption (encoding) distribution. X represents the space of corrupted samples. Bengio et al. (2013 define a transition operator of a Markov chain - using these conditional distributions - whose sta tionary distribution is P(X) under the assumption that Pe(X|X) perfectly denoises samples. 
Th chain is initialised with samples from the training data, and used to generate a chain of sample from P(X). This work was generalised to include a corruption process that mapped data samples t latent variables (Bengio et al.|2014), to create a new type of network called Generative Stochasti Networks (GSNs). However in GSNs (Bengio et al.]2014) the latent space is not regularised with prior.\nOur work is similar to several approaches proposed by Bengio et al. (2013] 2014) and Rezende et al. (Rezende et al.]2014). Both Bengio et al. and Rezende et al. define a transition operator in terms of Xt and Xt-1. Bengio et al. generate samples with an initial Xo drawn from the observed data, while Rezende et al. reconstruct samples from an Xo which is a corrupted version of a data sample. In contrasts to Bengio et al. and Rezende et al., in this work we define the transition operator in terms of Zt+1 and Zt, initialise samples with a Zo that is drawn from a prior distribution we can directly sample from, and then sample X1 conditioned on Zo. Although the initial samples may be poor, we are likely to generate a novel X1 on the first step of MCMC sampling, which would not be achieved using Bengio et al.'s or Rezende et al.'s approach. We are able draw initial Zo from a prior because we constrain P(Z) to be close to a prior distribution P(Z); in Bengio et al. a latent space is either not explicitly modeled (Bengio et al.[2013) or it is not constrained (Bengio et al.2014).\nUsing Lemmas1and2 with Theorem[1] we can say that the Markov chain defined by the transition operator in Equation (1) will produce a Markov chain that converges to the stationary distribution II(Z) = P(Z)\nFurther, Rezende et al. (2014) explicitly assume that the distribution of latent samples drawn from Qo(Z|X) matches the prior, P(Z). Instead, we assume that samples drawn from Qo(Z|X) have a distribution P(Z) that does not necessarily match the prior, P(Z). We propose an alternative method for sampling P(Z) in order to improve the quality of generated image samples. Our motivation is also different to Rezende et al. (2014) since we use sampling to generate improved, novel data samples, while they use sampling to denoise corrupted samples..\nThe choice of Lprior may effect how much improvement can be gained when using MCMC sam-. pling, assuming that the optimisation process converges to a reasonable solution. We first consider the case of VAEs, which minimise DkL[Qo(Z|X)[P(Z)]. Minimising this KL divergence pe- nalises the model P(Z) if it contains samples that are outside the support of the true distribution. P(Z), which might mean that P(Z) captures only a part of P(Z). This means that when sampling.\nGenerally speaking, using the reverse KL divergence during training, DkL[P(Z)||Qo(Z|X)], pe nalises the model Q(Z|X) if P(Z) produces samples that are outside of the support of P(Z). By. minimising this KL divergence, most samples in P(Z) will likely be in P(Z) as well. AAEs, on the. other hand are regularised using the JS entropy, given by DkL[P(Z)||(P(Z) + Q(Z|X))] + 1DkL[Q(Z|X)||(P(Z) + Q(Z|X))]. Minimising this cost function attempts to find a com promise between the aforementioned extremes. However, this still suggests that some samples from P(Z) may lie outside P(Z), and so we expect AAEs to also benefit from MCMC sampling."}, {"section_index": "8", "section_name": "4.1 MODELS", "section_text": "We utilise the deep convolutional GAN (DCGAN) (Radford et al.]2015) as a basis for our autoen-. coder models. 
"}, {"section_index": "8", "section_name": "4.1 MODELS", "section_text": "We utilise the deep convolutional GAN (DCGAN) (Radford et al. 2015) as a basis for our autoencoder models. Although the recommendations from Radford et al. (2015) are for standard GAN architectures, we adopt them as sensible defaults for an autoencoder, with our encoder mimicking the DCGAN's discriminator, and our decoder mimicking the generator. The encoder uses strided convolutions rather than max-pooling, and the decoder uses fractionally-strided convolutions rather than a fixed upsampling. Each convolutional layer is succeeded by spatial batch normalisation (Ioffe & Szegedy 2015) and ReLU nonlinearities, except for the top of the decoder, which uses a sigmoid function to constrain the output values between 0 and 1. We minimise the cross-entropy between the original and reconstructed images. Although this results in blurry images in regions which are ambiguous, such as hair detail, we opt not to use extra loss functions that improve the visual quality of generations (Larsen et al. 2015; Dosovitskiy & Brox 2016; Lamb et al. 2016), to avoid confounding our results.

Although the AAE is capable of approximating complex probabilistic posteriors (Makhzani et al. 2015), we construct ours to output a deterministic Qφ(Z|X). As such, the final layer of the encoder part of our AAEs is a convolutional layer that deterministically outputs a latent sample, z. The adversary is a fully-connected network with dropout and leaky ReLU nonlinearities. The encoders e(X; φ) of our VAEs have an output of twice the size, which corresponds to the means, μ, and standard deviations, σ, of a diagonal-covariance Gaussian distribution. For all models our prior, P(Z), is a 200D isotropic Gaussian with zero mean and unit variance: N(0, I)."}, {"section_index": "9", "section_name": "4.2 DATASETS", "section_text": "Our primary dataset is the (aligned and cropped) CelebA dataset, which consists of 200,000 images of celebrities (Liu et al. 2015). The DCGAN (Radford et al. 2015) was the first generative neural network model to show convincing novel samples from this dataset, and it has been used ever since as a qualitative benchmark due to the amount and quality of samples. In Figures 7 and 8 of the supplementary material, we also include results on the SVHN dataset, which consists of 100,000 images of house numbers extracted from Google Street View images (Netzer et al. 2011)."}, {"section_index": "10", "section_name": "4.3 TRAINING & EVALUATION", "section_text": "For all datasets we perform the same preprocessing: cropping the centre to create a square image, then resizing to 64 × 64px. We train our generative autoencoders for 20 epochs on the training split of the datasets, using Adam (Kingma & Ba 2014) with α = 0.0002, β1 = 0.5 and β2 = 0.999 (a configuration sketch follows below). The denoising generative autoencoders use the additive Gaussian noise mapping C(X̃|X) = N(X, 0.25I). All of our experiments were run using the Torch library (Collobert et al. 2011).

Example code is available at https://github.com/Kaixhin/Autoencoders
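A minimal sketch of this training configuration, written in PyTorch for illustration (the paper's experiments use Torch); `encoder`, `decoder`, and the training loop below are simplified stand-ins, not the paper's implementation:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the DCGAN-style encoder/decoder (the real ones are convolutional).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 200))
decoder = nn.Sequential(nn.Linear(200, 3 * 64 * 64), nn.Sigmoid())

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()),
    lr=2e-4, betas=(0.5, 0.999))          # alpha=0.0002, beta1=0.5, beta2=0.999

def corrupt(x, std=0.5):                  # C(X~|X) = N(X, 0.25 I) => std = 0.5
    return x + std * torch.randn_like(x)

def train_step(x):                        # x: batch of 64x64 images in [0, 1]
    x_rec = decoder(encoder(corrupt(x)))  # drop corrupt() for non-denoising models
    loss = nn.functional.binary_cross_entropy(x_rec, x.flatten(1))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```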
For evaluation, we generate novel samples from the decoder using z initially sampled from P(Z); we also show spherical interpolations (White 2016) between four images of the testing split, as depicted in Figure 2. We then perform several steps of MCMC sampling on the novel samples and interpolations. During this process, we use the training mode of batch normalisation (Ioffe & Szegedy 2015), i.e., we normalise the inputs using minibatch rather than population statistics, as the normalisation can partially compensate for poor initial inputs (see Figure 4) that are far from the training distribution. We compare novel samples between all models below, and leave further interpolation results to Figures 5 and 6 of the supplementary material.

[Figure 4: panels (a-p) show samples from the four models at 0, 1, 5 and 10 MCMC steps: (a-d) VAE, (e-h) DVAE, (i-l) AAE, (m-p) DAAE.]

Figure 4: Samples from a VAE (a-d), DVAE (e-h), AAE (i-l) and DAAE (m-p) trained on the CelebA dataset. (a), (e), (i) and (m) show initial samples conditioned on z ~ P(Z), which mainly result in recognisable faces emerging from noisy backgrounds. After 1 step of MCMC sampling, the more unrealistic generations change noticeably, and continue to do so with further steps. On the other hand, realistic generations, i.e. samples from a region with high probability, do not change as much.

The adversarial criterion for deterministic AAEs is difficult to optimise when the dimensionality of Z is high. We observe that during the training of our AAEs and DAAEs, the empirical standard deviation of z ~ Qφ(Z|X) is less than 1, which means that P̂(Z) fails to approximate P(Z) as closely as was achieved with the VAE and DVAE. However, this means that the effect of MCMC sampling is more pronounced, with the quality of all samples noticeably improving after a few steps. As a side effect of the suboptimal solution learned by the networks, the denoising properties of the DAAE are more noticeable with the novel samples."}, {"section_index": "11", "section_name": "5 CONCLUSION", "section_text": "In our experiments, we compare samples x ~ Pθ(X|Z = z0), z0 ~ P(Z) to x ~ Pθ(X|Z = z_i) for i = {1, 5, 10}, where the z_i are obtained through MCMC sampling, to show that MCMC sampling improves initially poor samples (see Figure 4). We also show that artifacts in x samples induced by interpolations across the latent space can also be corrected by MCMC sampling (see Figure 2). We further validate our work by showing that the denoising properties of denoising generative autoencoders are best revealed by the use of MCMC sampling.

Our MCMC sampling process is straightforward, and can be applied easily to existing generative autoencoders. This technique is orthogonal to the use of more powerful posteriors in AAEs (Makhzani et al. 2015) and VAEs (Kingma et al. 2016), and the combination of both could result in further improvements in generative modeling. Finally, our basic MCMC process opens the door to applying a large existing body of research on sampling methods to generative autoencoders."}, {"section_index": "12", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We would like to acknowledge the EPSRC for funding through a Doctoral Training studentship and the support of the EPSRC CDT in Neurotechnology."}, {"section_index": "13", "section_name": "REFERENCES", "section_text": "Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127, 2009.

Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising auto-encoders as generative models. In Advances in Neural Information Processing Systems, pp. 899-907, 2013.

Yoshua Bengio, Eric Thibodeau-Laufer, Guillaume Alain, and Jason Yosinski. Deep generative stochastic networks trainable by backprop. In Journal of Machine Learning Research: Proceedings of the 31st International Conference on Machine Learning, volume 32, 2014.

Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet.
Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.

Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016.

Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, and Yoshua Bengio. Denoising criterion for variational auto-encoding framework. arXiv preprint arXiv:1511.06406, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 448-456, 2015.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.

Alex Lamb, Vincent Dumoulin, and Aaron Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016.

Anders Boesen Lindbo Larsen, Soren Kaae Sonderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In Proceedings of The 33rd International Conference on Machine Learning, arXiv preprint arXiv:1512.09300, pp. 1558-1566, 2015. URL http://jmlr.org/proceedings/papers/v48/larsen16.pdf

Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, 2015.

Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
URL https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37648.pdf

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR) 2016, arXiv preprint arXiv:1511.06434, 2015. URL https://arxiv.org/pdf/1511.06434.pdf

Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, 2014.

Jeffrey S Rosenthal. A review of asymptotic convergence for general state space markov chains. Far East J. Theor. Stat, 5(1):37-50, 2001.

Tom White. Sampling generative networks. arXiv preprint arXiv:1609.04468, 2016."}, {"section_index": "14", "section_name": "Supplementary Material", "section_text": "For Pθ(X|Z) > 0 we require that all possible x ∈ X ⊆ R^a may be generated by the network. Assuming that the model Pθ(X|Z) is trained using a sufficient number of training samples x ∈ X_train = X, and that the model has infinite capacity to model X_train = X, then we should be able to draw any sample x ∈ X_train = X from Pθ(X|Z). In reality X_train ⊂ X and it is not possible to have a model with infinite capacity. However, Pθ(X|Z) is modelled using a deep neural network, which we assume has sufficient capacity to capture the training data well. Further, deep neural networks are able to interpolate between samples in very high dimensional spaces (Radford et al. 2015); we therefore further assume that if we have a large number of training samples (as well as large model capacity), almost any x ∈ X can be drawn from Pθ(X|Z).

For Qφ(Z|X) > 0 it must be possible to generate all possible z ∈ Z ⊆ R^b. Qφ(Z|X) is described by the function e(·; φ) : X → Z. To ensure that Qφ(Z|X) > 0, we want to show that the function e(X; φ) allows us to represent all samples of z ∈ Z. VAEs and AAEs each construct e(X; φ) to produce z ∈ Z in different ways.

The output of the encoder of a VAE, e_VAE(X; φ), is z = μ + ε ⊙ σ, where ε ~ N(0, I) (a minimal code sketch of this reparameterisation is given at the end of this section). The output of a VAE is then always Gaussian, and hence there is no limitation on the z's that e_VAE(X; φ) can produce. This ensures that Qφ(Z|X) > 0, provided that σ ≠ 0.

The encoder of our AAE, e_AAE(X; φ), is a deep neural network consisting of multiple convolutional and batch normalisation layers. The final layer of e_AAE(X; φ) is a fully connected layer without an activation function. The input to each of the M nodes in the fully connected layer is a function f_i(x), i = 1...M. This means that z is given by: z = a_1 f_1(x) + a_2 f_2(x) + ... + a_M f_M(x), where a_i, i = 1...M, are the learned weights of the fully connected layer. We now consider three cases:

Case 1: If the a_i are a complete set of bases for Z then it is possible to generate any z ∈ Z from an x ∈ X with a one-to-one mapping, provided that f_i(x) is not restricted in the values that it can take.

Case 2: If the a_i are an overcomplete set of bases for Z, then the same holds, provided that f_i(x) is not restricted in the values that it can take.

Case 3: If the a_i are an undercomplete set of bases for Z then it is not possible to generate all z ∈ Z from x ∈ X. Instead there is a many (X) to one (Z) mapping.

For Qφ(Z|X) > 0 our network must learn a complete or overcomplete set of bases, and f_i(x) must not be restricted in the values that it can take, for all i. The network is encouraged to learn an overcomplete set of bases by learning a large number of a_i's (specifically M = 8192 when basing our network on the DCGAN architecture (Radford et al. 2015), more than 40 times the dimensionality of Z). By using batch normalisation layers throughout the network, we ensure that the values of f_i(x) are spread out, capturing a close-to-Gaussian distribution (Ioffe & Szegedy 2015), encouraging infinite support.

We have now shown that, under certain reasonable assumptions, Pθ(X|Z) > 0 and Qφ(Z|X) > 0, which means that T(Z_{t+1} | Z_t) > 0, and hence we can get from any z to any other z in only one step. Therefore the Markov chain described by the transition operator T(Z_{t+1} | Z_t) defined in Equation (1) is both irreducible and aperiodic, which are the necessary conditions for ergodicity.
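A minimal sketch of the VAE reparameterisation discussed above (illustrative; names are not from the paper's code). Since ε is standard normal, every z ∈ R^b has positive density whenever σ > 0, which is the Qφ(Z|X) > 0 condition used in the proof:

```python
import numpy as np

def e_vae(mu, log_sigma, rng=np.random.default_rng(0)):
    """Draw z = mu + eps * sigma with eps ~ N(0, I).

    sigma = exp(log_sigma) is strictly positive, so the resulting Gaussian
    places positive density on all of R^b.
    """
    eps = rng.standard_normal(np.shape(mu))
    return np.asarray(mu) + eps * np.exp(log_sigma)
```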
Note that if we wish to generate human faces, we define X_all to be the space of all possible faces, with distribution P(X_all), while X_train is the space of faces made up by the training data. Then, practically, even a well-trained model which learns to interpolate well only captures a space X̂ with distribution ∫ Pθ(X|Z) P̂(Z) dZ, where X_train ⊆ X̂ ⊆ X_all, because X̂ additionally contains examples of interpolated versions of x ~ P(X_train).

[Figure 5: panels (a-d) show DVAE interpolations for two pairs of faces, before and after 5 MCMC steps.]

Figure 5: Interpolating between two faces using (a-d) a DVAE. The top row (a, c) for each face is the original interpolation, whilst the second row (b, d) is the result of 5 steps of MCMC sampling applied to the latent samples that were used to generate the original interpolation. The only qualitative difference when compared to VAEs (see Figure 4) is a desaturation of the generated images.

[Figure 6: panels (a-h) show AAE and DAAE interpolations, before and after 5 MCMC steps.]

Figure 6: Interpolating between two faces using (a-d) an AAE and (e-h) a DAAE. The top rows (a, c, e, g) for each face are the original interpolations, whilst the second rows (b, d, f, h) are the result of 5 steps of MCMC sampling applied to the latent samples that were used to generate the original interpolation. Although the AAE performs poorly (b, d), the regularisation effect of denoising can be clearly seen with the DAAE after applying MCMC sampling (f, h).

C STREET VIEW HOUSE NUMBERS

C.1 SAMPLES

[Figure 7: panels (a-p) show SVHN samples from the four models at 0, 1, 5 and 10 MCMC steps: (a-d) VAE, (e-h) DVAE, (i-l) AAE, (m-p) DAAE.]

Figure 7: Samples from a VAE (a-d), DVAE (e-h), AAE (i-l) and DAAE (m-p) trained on the SVHN dataset. The samples from the models imitate the blurriness present in the dataset. Although very few numbers are visible in the initial samples, the VAE and DVAE produce recognisable numbers from most of the initial samples after a few steps of MCMC sampling. Although the AAE and DAAE fail to produce recognisable numbers, the final samples are still a clear improvement over the initial samples.

C.2 INTERPOLATIONS

[Figure 8: panels (a-h) show SVHN interpolations from the VAE and DVAE, before and after 5 MCMC steps.]

Figure 8: Interpolating between Google Street View house numbers using (a-d) a VAE and (e-h) a DVAE. The top rows (a, c, e, g) for each house number are the original interpolations, whilst the second rows (b, d, f, h) are the result of 5 steps of MCMC sampling. If the original interpolation produces symbols that do not resemble numbers, as observed in (a) and (e), the models will attempt to move the samples towards more realistic numbers (b, f). Interpolation between 1- and 2-digit numbers in an image (c, g) results in a meaningless blur in the middle of the interpolation. After a few steps of MCMC sampling the models instead produce more recognisable 1- or 2-digit numbers (d, h).
We note that when the contrast is poor, denoising models in particular can struggle to recover meaningful images (h)."}]
rJbPBt9lg | [{"section_index": "0", "section_name": "NEURAL CODE COMPLETION", "section_text": "Chang Liu*, Xin Wang*, Richard Shin, Joseph E. Gonzalez, Dawn Song
University of California, Berkeley

Code completion, an essential part of modern software development, can be challenging for dynamically typed programming languages. In this paper we explore the use of neural network techniques to automatically learn code completion from a large corpus of dynamically typed JavaScript code. We show different neural networks that leverage not only token-level information but also structural information, and evaluate their performance on different prediction tasks. We demonstrate that our models can outperform the state-of-the-art approach, which is based on decision-tree techniques, on both next non-terminal and next terminal prediction tasks, by 3.8 points and 0.5 points respectively. We believe that neural network techniques can play a transformative role in helping software developers manage the growing complexity of software systems, and we see this work as a first step in that direction."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "As the scale and complexity of modern software libraries and tools continue to grow, code completion has become an essential feature in modern integrated development environments (IDEs). By suggesting the right libraries, APIs, and even variables in real time, intelligent code completion engines can substantially accelerate software development. Furthermore, as many projects move to dynamically typed and interpreted languages, effective code completion can help to reduce costly errors by eliminating typos and identifying the right arguments from context.

However, existing approaches to intelligent code completion either rely on strong typing (e.g., Visual Studio for C++), which limits their applicability to widely used dynamically typed languages (e.g., JavaScript and Python), or are based on simple heuristics and term-frequency statistics, which are often brittle and relatively error-prone. In particular, Raychev et al. (2016a) propose the state-of-the-art probabilistic model for code, which generalizes both simple n-gram models and probabilistic grammar approaches. This approach, however, examines only a limited number of elements in the source code when completing the code. Therefore, its effectiveness may not scale well to large programs.

In this paper we explore the use of deep learning techniques to address the challenges of code completion for the widely used and dynamically typed JavaScript programming language. We formulate the code completion problem as a sequential prediction task over the traversal of a parse-tree structure consisting of both non-terminal structural nodes and terminal nodes encoding program text. We then present simple, yet expressive, LSTM-based (Hochreiter & Schmidhuber (1997)) models that leverage additional side information obtained by parsing the program structure.

Compared to widely used heuristic techniques, deep learning for code completion offers the opportunity to learn rich contextual models that can capture language- and even library-specific code patterns without requiring complex rules or expert intervention.

We evaluate our recurrent neural network architecture on an established benchmark dataset for JavaScript code completion.
Our evaluations reveal several findings: (1) when evaluated on short programs, our RNN-based models can achieve better performance on the next node prediction tasks compared to the prior art (Bielik et al. (2016); Raychev et al. (2016a)), which is based on decision-tree models; (2) our models' prediction accuracies on longer programs, which are provided in the test set but were not evaluated by previous work, are better than our models' accuracies on shorter programs; and (3) in the scenario that the code completion engine suggests a list of candidates, our RNN-based models allow users to choose from a list of 5 candidates rather than inputting manually for over 96% of all the time when this is possible.

These promising results encourage more investigation into developing neural network approaches for the code completion problem. We believe that our work not only highlights the importance of the field of neural network-based code completion, but is also an important step toward neural network-based program synthesis.

Raychev et al. (2014) and White et al. (2015) explore how to use recurrent neural networks (RNNs) to facilitate the code completion task. However, these works only consider running RNNs on top of a token sequence to build a probabilistic model. Although the input sequence considered in Raychev et al. (2014) is produced from an abstract object, the structural information contained in the abstract syntax tree is not directly leveraged by the RNN structure in either of these two works. In contrast, we consider extending LSTM, an RNN structure, to leverage the structural information directly for the code prediction task.

*The first and second authors contributed equally and are listed in alphabetical order.

[Figure 1 screenshot: a partially written webpack bootstrap snippet (return module.exports; __webpack_require__.m = modules; __webpack_require__.c = installedModules; ...) with the IDE's completion popup after "__webpack_require__." listing apply, arguments, bind, arity, call, caller, constructor, hasOwnProperty, and a hint for choosing and inserting the selected suggestion.]

Figure 1: Code Completion Example in IntelliJ IDEA

[Figure 2 screenshot: the same snippet with the model's top predictions for the next token: p (probability 0.9464), c (0.0300), m (0.0061).]

Figure 2: Correct prediction of the program in Figure 1

In this section, we first introduce the problem of code completion and its challenges. Then we explain abstract syntax trees (AST), which we use as the input for our problems. Lastly, we formally define the code completion problem in different settings as several prediction problems based on a partial AST.

Code completion is a feature in some integrated development environments (IDEs) to speed up programmers' coding process. Figure 1 demonstrates this feature in IntelliJ IDEA. In this example,
We believe that our work not only highlights the importance of the field of neural network-based code completion, but is also an important step toward neural network-based program synthesis.\nRaychev et al.[(2014) and|White et al.[(2015) explore how to use recurrent neural networks (RNNs) to facilitate the code completion task. However, these works only consider running RNNs on top of a token sequence to build a probabilistic model. Although the input sequence considered in Raychev et al.[(2014) is produced from an abstract object, the structural information contained in the abstract syntax tree is not directly leveraged by the RNN structure in both of these two works. In contrast, we consider extending LSTM, a RNN structure, to leverage the structural information directly for. the code prediction task\na part of a JavaScript program has been input to the IDE. When the dot symbol (i.e., \"\") is addec. after --webpack require--, the IDE prompts with a list of candidates that the programmer is most likely to input next. When a candidate matches the intention, the programmer can choose i from the list rather than typing it manually. In this work, we define the code completion problenr. as predicting the next symbol while a program is being written. We consider this problem as ar important first step toward completing an entire program.\nTraditional code completion techniques are developed by the programming language community. to leverage context information for prediction. For example, when a programmer writes a Java program and inputs a variable name and then a dot symbol, the code completion engine will analyze the class of the variable and prompt the members of the class. In programming language literature. such information is referred to as type information. Statically typed languages, such as C and Java.. enforces type checking at static time, so that the code completion engine can take advantage of full. type information to make prediction without executing the code..\nIn recent years, dynamically typed languages, such as Python or JavaScript, have become increas-. ingly popular. In these languages, type checking is usually performed dynamically while executing. a program. Thus, type information may be only partially available to the code completion engine while the programmer is writing the code. Despite their popularity, the dynamic typing of these. languages makes code completion for them challenging. For example, in Figure[1] the next symbol. to be added is p. This symbol does not appear in the previous part of the program, and thus the code completion engine in IntelliJ IDEA IDE cannot prompt with this symbol..\nHowever, this challenge may be remedied by leveraging a large corpus of code, a.k.a., big code In fact, --webpack requi re--.p is a frequently used combination appearing in many programs on Github. com, one of the largest repositories of source code. Therefore, a code completion engine powered by big code is likely to learn this combination and to prompt p. In fact, our methods discussed in later sections can predict this case very well (Figure|2),.\nRegardless of whether it is dynamically typed or statically typed, any programming language has ar. unambiguous context free grammar (CFG), which can be used to parse source code into an abstrac. syntax tree (AST). Further, an AST can be converted back into source code easily. Therefore we consider the input of our code completion problem as an AST, which is a typical assumption made. 
by most code completion engines.

An AST is a rooted tree. In an AST, each non-leaf node corresponds to a non-terminal in the CFG specifying structure information. In JavaScript, non-terminals may be ExpressionStatement, ForStatement, IfStatement, SwitchStatement, etc. Each leaf node corresponds to a terminal in the CFG encoding program text. There are infinite possibilities for terminals. They can be variable names, string or numerical literals, operators, etc.

Figure 3 illustrates a part of the AST of the code snippet in Figure 1. In this tree, a node without a surrounding box (e.g., ExpressionStatement, etc.) denotes a non-terminal node. A node embraced by an orange surrounding box (e.g., installedModules) denotes a terminal node. At the bottom of the figure, there is a non-terminal node Property and a terminal node p. They have not been observed by the editor, so we use green to indicate this fact. Note that each non-terminal has at most one terminal as its child.

In this work, we consider the input to be a partial AST, and the code completion problem is to predict the next node given the partial AST. In the following, we first define a partial AST, and then present the code completion problems in different scenarios.

In a traditional code completion engine, the AST can be further processed by a type checker so that type information will be attached to each node. In this work, however, we focus on dynamically typed languages, and type information is not always available. Therefore, we do not consider the type information provided by a compiler, and leave it for our future work.

[Figures 3 and 4 depict the (partial) AST of the code in Figure 1, with non-terminal nodes such as ExpressionStatement, AssignmentStatement, MemberStatement, Identifier, and Property, terminal nodes such as __webpack_require__ and installedModules, the right-most node of the partial AST, and the next node following the partial AST.]

Figure 3: AST example (part). Figure 4: Partial AST example

Input: a partial AST. Given a complete AST T, we define a partial AST to be a subtree T' of T, such that for each node n in T', its left set L_T(n) with respect to T is a subset of T', i.e., L_T(n) ⊆ T'. Here, the left set L_T(n) of a node n with respect to T is defined as the set of all nodes in the in-order sequence during the depth-first search of T that are visited earlier than n.

Under this definition, in each partial AST T', there exists the right-most node n_R, such that all other nodes in T' form its left set L_T(n_R). The next node in the in-order depth-first search visiting sequence after n_R is also the first node not appearing in T'. We call this node the next node following the partial AST. Figure 4 illustrates these concepts using the example in Figure 3. In the rest of the paper, we also refer to a partial AST as a query; a small sketch of these definitions is given below.
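The following is an illustrative sketch (hypothetical structures, not the paper's code) of the definitions above: enumerating an AST in depth-first order and locating the next node following a partial AST.

```python
from dataclasses import dataclass, field

@dataclass(eq=False)          # identity-based hashing so nodes can live in sets
class Node:
    kind: str                 # non-terminal kind, e.g. "ForStatement"
    terminal: str = "EMPTY"   # terminal child value, or the artificial EMPTY
    children: list = field(default_factory=list)

def dfs(node):
    """Yield nodes in the depth-first visiting order used to define L_T(n)."""
    yield node
    for child in node.children:
        yield from dfs(child)

def next_node(tree, partial):
    """Return the first node of `tree` (in DFS order) that is not in `partial`,
    i.e., the "next node following the partial AST"."""
    seen = set(partial)
    for n in dfs(tree):
        if n not in seen:
            return n
    return None
```

For the tree in Figure 3, `partial` would contain every node up to the right-most observed node, and `next_node` would return the unobserved Property node.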
Next node prediction. Given a partial AST, the next node prediction problem, as suggested by its name, is to predict the next node following the partial AST. Based on the node's kind, i.e., whether it is a non-terminal node or a terminal one, we can categorize the problem into the next non-terminal prediction problem and the next terminal prediction problem. Although the next terminal prediction problem may sound more interesting, the next non-terminal prediction problem is also important, since it predicts the structure of the program. For example, when the next non-terminal is ForStatement, the next token in the source program is the keyword for, which does not have a corresponding terminal in the dataset. In this case, a model able to predict the next non-terminal can be used by the code completion engine to emit the keyword for. These two tasks are also the same problems considered by previous works employing domain-specific languages to achieve heuristic-based code completion (Raychev et al. (2016b); Bielik et al. (2016)).

Predicting the next node versus predicting the next token. A natural alternative formulation of the problem is predicting the next token given the token sequence that has been inputted so far. Such a formulation, however, does not take advantage of the AST information, which is very easy to acquire with a suitable parser. Predicting the next node allows taking advantage of such information to enable more intelligent code completion.

In particular, predicting the next non-terminal allows completing the structure of a code block rather than a single (keyword) token. For example, when the next token is a keyword for, the corresponding next non-terminal is ForStatement, which corresponds to the following code block:

for (...) {
    // for-loop body
}

In this case, successfully predicting the next non-terminal node allows completing not only the next key token for, but also tokens such as (, ;, ), {, and }. Such structure completion enabled by predicting the next non-terminal is more compelling in modern IDEs (a toy sketch of this expansion is given at the end of this discussion).

Predicting the next terminal node allows completing identifiers, properties, literals, etc., which is similar to next token prediction. However, predicting the next terminal node can leverage the information of the predicted node's non-terminal parent, indicating what is being predicted, i.e., an identifier, a property, or a literal, etc. For example, when completing the following expression:

__webpack_require__.

the code completion engine with AST information will predict a property of __webpack_require__, while the engine without AST information only learns two tokens, __webpack_require__ and a dot ".", and tries to predict the next token without any constraint. In our evaluation, we show that leveraging the information from the non-terminal parent can significantly improve the performance.

In this work, we focus on the next node prediction task, and leave the comparison with next token prediction as our future work.
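A toy sketch of the structure completion described above; the mapping below is a hypothetical illustration rather than the paper's method (a real engine would derive the skeletons from the grammar):

```python
# Map a predicted non-terminal to the code skeleton its keyword tokens imply.
SKELETONS = {
    "ForStatement": "for (;;) {\n    // for-loop body\n}",
    "IfStatement": "if () {\n}",
    "SwitchStatement": "switch () {\n}",
}

def complete_structure(predicted_nonterminal: str) -> str:
    return SKELETONS.get(predicted_nonterminal, "")

print(complete_structure("ForStatement"))
```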
Joint prediction. A more important problem than predicting only the next non-terminal or terminal by itself is to predict the next non-terminal and terminal together. We refer to this task of predicting both the next non-terminal and terminal as the joint prediction problem. We hope code completion can be used to generate the entire parsing tree in the end, and joint prediction is one step further toward this goal than next node prediction.

Formally, the joint prediction problem that we consider is that, given a partial AST whose following node is a non-terminal one, we want to predict both the next non-terminal and the next terminal.

There may be non-terminal nodes which do not have a terminal child (e.g., an AssignmentStatement). In this case, we artificially add an EMPTY terminal as its child. Note that this treatment is the same as in Bielik et al. (2016). We count it as a correct prediction if both the next non-terminal and terminal are predicted correctly.

Denying prediction. There may be infinite possibilities for terminals, so it is impossible to predict all terminals correctly. We consider an alternative scenario in which, when it thinks that the programmer will input a rare terminal, the code completion engine should have the ability to identify this case and deny predicting the next node(s).

In our problem, we build a vocabulary for frequent terminals. All terminals not in this vocabulary are considered as an UNK terminal. In this case, when it predicts UNK for the next terminal, the code completion model is considered as denying prediction. Since the non-terminals' vocabulary size is very small, denying prediction is only considered for the next terminal prediction, but not for the next non-terminal prediction."}, {"section_index": "2", "section_name": "4 MODELS", "section_text": "In this section, we present the basic models considered in this work. In particular, given a partial AST as input, we first convert the AST into its left-child right-sibling representation, and serialize it as its in-order depth-first search sequence. Thus, we consider the input for the next non-terminal prediction as a sequence of length k, i.e., (N1, T1), (N2, T2), ..., (Nk, Tk). Here, for each i, Ni is a non-terminal, and Ti is the terminal child of Ni. For each non-terminal node Ni, we encode not only its kind, but also whether the non-terminal has at least one non-terminal child, and/or one right sibling. In doing so, from an input sequence, we can reconstruct the original AST. This encoding is also employed by Raychev et al. (2016a). We refer to each element in the sequence (e.g., (Ni, Ti)) as a token. As mentioned above, a non-terminal without a terminal child is considered to have an EMPTY child.

This input sequence (N1, T1), (N2, T2), ..., (Nk, Tk) is the only input for all problems except the next terminal prediction. For the next terminal prediction problem, besides the input sequence, we also have the information about the parent of the current predicting terminal, which is a non-terminal, i.e., N_{k+1}.

Throughout the rest of the discussion, we assume that both Ni and Ti employ one-hot encoding. The vocabulary sets of non-terminals and terminals are separate."}, {"section_index": "3", "section_name": "4.1 NEXT NON-TERMINAL PREDICTION", "section_text": "Given an input sequence, our first model predicts the next non-terminal. The architecture is illustrated in Figure 5. We refer to this model as NT2N, which stands for using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal.

[Figure 5: the embedded token sequence A·N1+B·T1, ..., A·Nk+B·Tk is fed through a chain of LSTM cells with states (h0, c0), ..., (h_{k-1}, c_{k-1}); the final output h_k goes through a softmax layer with weights W_N to produce the prediction N_{k+1}.]

Figure 5: Architecture (NT2N) for predicting the next non-terminal

Embedding non-terminal and terminal. Given an input sequence, the embedding of each token is computed as

E_i = A·N_i + B·T_i

where A and B are the embedding matrices for non-terminals and terminals respectively.

LSTM layer. The embedded sequence is then fed into an LSTM layer to get the hidden states (a code sketch of the cell is given at the end of this subsection). In particular, an LSTM cell takes an input token and a hidden state h_{i-1}, c_{i-1} from the previous LSTM cell as input, computes a hidden state h_i, c_i, and outputs h_i, based on the following formulas:

q = σ(P_q [E_i; h_{i-1}]),   f = σ(P_f [E_i; h_{i-1}]),   o = σ(P_o [E_i; h_{i-1}]),   g = tanh(P_g [E_i; h_{i-1}])
c_i = f ⊙ c_{i-1} + q ⊙ g
h_i = o ⊙ tanh(c_i)

Here, each P denotes a J × 2J parameter matrix, where J is the size of the hidden state, i.e., the dimension of h_i, which is equal to the size of the embedding vectors. σ and ⊙ denote the sigmoid function and pointwise multiplication respectively.
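A minimal NumPy sketch of the LSTM cell equations above (parameter shapes per the text: each gate uses a J × 2J matrix over the concatenation [E_i; h_{i-1}]):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_cell(e, h_prev, c_prev, P_q, P_f, P_o, P_g):
    """One step of the LSTM equations above; e, h_prev, c_prev are J-vectors."""
    v = np.concatenate([e, h_prev])           # 2J-dimensional input [E_i; h_{i-1}]
    q, f, o = sigmoid(P_q @ v), sigmoid(P_f @ v), sigmoid(P_o @ v)
    g = np.tanh(P_g @ v)
    c = f * c_prev + q * g                    # c_i = f . c_{i-1} + q . g
    h = o * np.tanh(c)                        # h_i = o . tanh(c_i)
    return h, c
```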
Softmax layer. Assume h_k is the output hidden state of the last LSTM cell. h_k is fed into a softmax classifier to predict the next non-terminal. In particular, we have

N_{k+1} = softmax(W_N h_k + b_N)

where W_N and b_N are a matrix of size V_N × J and a V_N-dimensional vector respectively.

Using only non-terminal inputs. One variant of this model is to omit all terminal information from the input sequence. In this case, the embedding is computed as E_i = A·N_i. We refer to this model as N2N, which stands for using the Non-terminal sequence TO predict the next Non-terminal.

A further variant additionally predicts the next terminal from the same hidden state:

T_{k+1} = softmax(W_T h_k + b_T)

where W_T and b_T are a matrix of size V_T × J and a V_T-dimensional vector respectively. In this case, the loss function has an extra term to give supervision on predicting T. We refer to this model as NT2NT, which stands for using the sequence of Non-terminal and Terminal pairs TO predict the next Non-terminal and Terminal pair.

In the next terminal prediction problem, the partial AST does not only contain (N1, T1), ..., (Nk, Tk), but also N_{k+1}. In this case, we can employ the architecture in Figure 6 to predict T_{k+1}. In particular, we first get the LSTM output h_k in the same way as in NT2N. The final prediction is based on

T_{k+1} = softmax(W_T h_k + W_NT N_{k+1} + b_T)

where W_NT is a matrix of size V_T × V_N, and W_T and b_T are the same as in NT2NT. We refer to this model as NTN2T, which stands for using the Non-terminal and Terminal pair sequence and the next Non-terminal TO predict the next Terminal.

[Figure 6: the NTN2T architecture; the LSTM runs over the embedded sequence as in Figure 5, and the softmax layer combines W_T h_k with the parent non-terminal N_{k+1} through W_NT to predict T_{k+1}.]

Figure 6: Architecture (NTN2T) for predicting the next terminal

Note that the model NT2NT can also be used for the next terminal prediction task, although the non-terminal information N_{k+1} is not leveraged. We will compare the two approaches later."}, {"section_index": "4", "section_name": "4.3 JOINT PREDICTION", "section_text": "We consider two approaches to predict the next non-terminal and the next terminal together. The first approach is NT2NT, which is designed to predict the two kinds of nodes together.

An alternative approach is to (1) use a next non-terminal approach X to predict the next non-terminal; and (2) feed the predicted non-terminal and the input sequence into NTN2T to predict the next terminal. We refer to such an approach as X+NTN2T."}, {"section_index": "5", "section_name": "4.4 DENYING PREDICTION", "section_text": "We say a model denies prediction when it predicts the next terminal to be UNK, a special terminal that substitutes for rare terminals. However, due to the large number of rare terminals, the occurrences of UNK may be much greater than those of any single frequent terminal. In this case, a model that can deny prediction may tend to predict UNKs, and thus may predict for fewer queries than it should.

To mitigate this problem, we modify the loss function to be adaptive. Specifically, training a machine learning model f_θ is to optimize the following objective:

argmin_θ Σ_i l(f_θ(q_i), y_i)

where {(q_i, y_i)} is the training dataset consisting of pairs of a query q_i and its ground truth next token y_i. l is the loss function measuring the distance between the prediction ŷ_i = f_θ(q_i) and the ground truth y_i. We choose l to be the standard cross-entropy loss. We introduce a weight α_i for each sample (q_i, y_i) in the training dataset to change the objective to the following:

argmin_θ Σ_i α_i · l(f_θ(q_i), y_i)

When training a model not allowed to deny prediction, we set α_i = 0 for y_i = UNK, and α_i = 1 otherwise. In doing so, it is equivalent to removing all queries whose ground truth next token is UNK.

When training a model that allows denying prediction, we set all α_i to be 1. To denote this case, we put the notation "+D" at the end of the model name (e.g., NT2NT+D, etc.). A sketch of this weighted objective is given below.
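A minimal sketch of the adaptive objective above (illustrative, with hypothetical names): the weight α_i zeroes out UNK-labelled queries for standard models and keeps them for "+D" models.

```python
import numpy as np

def weighted_xent(probs, labels, unk_id, allow_deny):
    """Sum of alpha_i * cross-entropy over a batch.

    probs: (n, V_T) predicted softmax outputs; labels: (n,) token ids.
    alpha_i = 1 for UNK labels in the "+D" setting, otherwise 0 for UNK labels.
    """
    alpha = np.where(labels == unk_id, 1.0 if allow_deny else 0.0, 1.0)
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float((alpha * nll).sum())
```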
"}, {"section_index": "6", "section_name": "5.1 DATASET", "section_text": "We use the JavaScript dataset provided by Raychev et al. (2016b) to evaluate the different approaches. The statistics of the dataset can be found in Table 1. Raychev et al. (2016a) provide an approach, called PHOG, for the next token prediction. The reported accuracy results are based on a subset of 5.3 × 10^7 queries from the full test set. Specifically, Raychev et al. (2016a) chose all queries in each program containing fewer than 30,000 tokens. When we compare with their results, we use the same test set. Otherwise, without special specification, our results are based on the full test set consisting of 8.3 × 10^7 queries.

Table 1: Dataset statistics. Training set: 100,000 programs, 1.7 × 10^8 queries. Test set: 50,000 programs, 8.3 × 10^7 queries. Overall: 44 non-terminals, 3.1 × 10^6 terminals.

"}, {"section_index": "7", "section_name": "5.2 TRAINING DETAILS", "section_text": "Vocabulary. In our dataset, there are 44 different kinds of non-terminals. Combining two more bits of information to indicate whether the non-terminal has a child and/or a right sibling, there are at most 176 different non-terminals. However, not all such combinations are possible: a ForStatement must have a child. In total, the vocabulary size for non-terminals is 97. For terminals, we sort all terminals in the training set by their frequencies. Then we choose the 50,000 most frequent terminals to build the vocabulary. We further add three special terminals: UNK for out-of-vocabulary tokens, EOF indicating the end of a program, and EMPTY for a non-terminal which does not have a terminal. Note that about 45% of terminals in the dataset are EMPTY terminals.

We divide each program into segments consisting of s consecutive tokens (see the segmentation sketch below). The last segment of a program, which may not be full, is padded with EOF tokens. We coalesce multiple epochs together. We organize all training data into 6 buckets. In each epoch, we randomly shuffle all programs in the training data to construct a queue. Whenever a bucket is empty, a program is popped from the queue and all segments of the program are inserted into the empty bucket sequentially. When the queue becomes empty, i.e., the current epoch finishes, all programs are re-shuffled randomly to re-construct the queue. Each mini-batch is formed by b segments, i.e., one segment popped from each bucket. When the training data has been shuffled e = 8 times, i.e., e epochs have been inserted into the buckets, we stop adding whole programs, and start adding only the first segment of each program: when a bucket is empty, a program is chosen randomly, and its first segment is added to the bucket. We terminate the training process when all buckets are empty at the same time. That is, all programs from the first 8 epochs have been trained. This is illustrated in Figure 7.

Figure 7: Training epoch illustration
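A small sketch of the segmentation step above (illustrative): each program's serialized token sequence is cut into length-s segments, and the final partial segment is padded with EOF tokens.

```python
def segments(tokens, s=50, eof=("EOF", "EOF")):
    """Split a serialized program into length-s segments, padding with EOF."""
    segs = [tokens[i:i + s] for i in range(0, len(tokens), s)]
    if segs and len(segs[-1]) < s:
        segs[-1] = segs[-1] + [eof] * (s - len(segs[-1]))
    return segs

# Example: a 120-token program yields segments of lengths 50, 50, and 50 (padded).
print([len(seg) for seg in segments([("N", "T")] * 120)])
```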
Training details. We use a single-layer LSTM network with hidden unit size 1500 as our base model. To train the model, we use Adam (Kingma & Ba (2014)) with base learning rate 0.001. The learning rate is multiplied by 0.9 every 0.2 epochs. We clip the gradients' norm to 5. The batch size is b = 80. We use truncated backpropagation through time, unrolling the LSTM model s = 50 times to take an input sequence of length 50 in each batch (and therefore each batch contains b × s = 4000 tokens).

The hidden states are initialized with h0, c0, which are two trainable vectors. The hidden states of the LSTM from the previous segment are fed into the next one as input if both segments belong to the same program. Otherwise, the hidden states are reset to h0, c0. We observe that resetting the hidden states for every new program improves the performance a lot.

We initialize all parameters in h0, c0 to be 0. All other parameters are initialized with values uniformly randomly sampled from [-0.05, 0.05]. For each model, we train 5 sets of parameters using different random initializations. We evaluate the ensemble of the 5 models by averaging the 5 softmax outputs (as sketched below). In our evaluation, we find that the ensemble improves the accuracy by 1 to 3 points in general.
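A one-function sketch of the ensembling above (illustrative):

```python
import numpy as np

def ensemble_predict(softmax_outputs):
    """Average the softmax outputs of the trained models, then take the argmax."""
    return int(np.argmax(np.mean(softmax_outputs, axis=0)))

# Example with three toy 4-way distributions:
print(ensemble_predict([[0.1, 0.6, 0.2, 0.1],
                        [0.2, 0.5, 0.2, 0.1],
                        [0.3, 0.4, 0.2, 0.1]]))   # -> 1
```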
In this section, we present the results of our models on next node prediction, and compare them with their counterparts in Bielik et al. (2016), which is the state of the art on these tasks. We therefore use the same test set consisting of 5.3 × 10^7 queries as in Bielik et al. (2016). In the following, we first report results of next non-terminal prediction and of next terminal prediction, then evaluate our considered models' performance on programs with different lengths.

Next non-terminal prediction. The results are presented in Table 2. From the table, we can observe that both NT2N and NT2NT can outperform Raychev et al. (2016a). In particular, an ensemble of 5 NT2N models improves on Raychev et al. (2016a) by 3.8 percentage points. We also report the average accuracies of the 5 single models and the variance among them. We observe that the variance is very small, i.e., 0.1%-0.2%. This indicates that the trained models' accuracies are robust to random initialization.

Table 2: Next non-terminal prediction results

Among the neural network approaches, NT2NT's performance is lower than NT2N's, even though the former is provided with more supervision. This shows that, given the limited capacity of the model, it may learn to trade off non-terminal prediction performance in favor of the terminal prediction task it additionally needs to perform.

Next terminal prediction. The results are presented in Table 3. We observe that an ensemble of 5 NTN2T models can outperform Raychev et al. (2016a) by 0.5 points. Without the ensemble, its accuracies are around 82.1%, i.e., 0.8 points less than Raychev et al. (2016a). For the 5 single models, we also observe that the variance in their accuracies is very small, i.e., 0.1%. On the other hand, we observe that NT2NT has much worse performance than NTN2T, i.e., by 4.8 percentage points. This shows that leveraging additional information about the parent non-terminal of the current predicting terminal can improve the performance significantly.

Table 3: Next terminal prediction results

Prediction accuracies on programs with different lengths. We examine our considered models' performance over different subsets of the test set. In particular, we consider the queries in programs containing no more than 30,000 tokens, which is the same subset as used in Bielik et al. (2016); Raychev et al. (2016a). We also consider the rest of the queries, in programs which have more than 30,000 tokens. The results are presented in Table 4.

Table 4: Next token prediction on programs with different lengths

                                            Non-terminal                Terminal
                                            N2N     NT2N    NT2NT       NTN2T   NT2NT
Top 1 accuracy
Short programs (<30,000 non-terminals)      82.3%   87.7%   86.2%       83.4%   78.6%
Long programs (>30,000 non-terminals)       87.7%   94.4%   92.7%       89.0%   85.8%
Overall                                     84.2%   90.1%   88.5%       85.4%   81.2%
Top 5 accuracy
Short programs (<30,000 non-terminals)      97.9%   98.9%   98.7%       87.9%   86.4%
Long programs (>30,000 non-terminals)       98.8%   99.6%   99.4%       91.5%   90.5%
Overall                                     98.2%   99.1%   98.9%       89.2%   87.8%

We can observe that for both non-terminal and terminal prediction, accuracies on longer programs are higher than on shorter programs. This shows that an LSTM-based model may become more accurate when observing more code inputted by programmers.

We also report top 5 prediction accuracy. We can observe that the top 5 accuracy improves upon the top 1 accuracy dramatically. This metric corresponds to the code completion scenario in which an IDE may pop up a list of a few (i.e., 5) candidates for users to choose from. In particular, NT2N can achieve 99.1% top-5 accuracy on the non-terminal prediction task. On the other hand, NTN2T can achieve 89.2% accuracy on the terminal prediction task. In the test set, 7.4% of tokens have UNK as their ground truth, i.e., they are not among the top 50,000 most frequent tokens. This means that NTN2T can predict over 89.2/(100 - 7.4)% = 96.3% of all tokens whose ground truth is not UNK. Therefore, users can choose from the popup list without typing the token manually over 96% of all the time that code completion is possible, if the completion is restricted to the top 50,000 most frequent tokens in the dataset.

The effectiveness of different UNK thresholds. We evaluate the effect of the threshold used to cut off UNK terminals on the accuracy. We randomly sample 1/10 of the training dataset and the test dataset, and vary the threshold for UNK terminals from 10,000 to 80,000. We plot the percentage of UNK terminals in both the full test set and its subset in Figure 8. We can observe that the distributions of UNK terminals are almost the same in both sets. Further, when the threshold is 10,000, i.e., all terminals out of the top 10,000 most frequent ones are turned into UNKs, there are more than 11% UNK queries (i.e., queries with ground truth being UNK) in the test set. When the threshold increases to 50,000 or more, this number drops to 7%-6%. The variance of the UNK queries' percentages is not large when the threshold is varied from 50,000 to 80,000.

[Figure 8 is a line plot of the percentage of UNK tokens (6-12%) against the UNK threshold (10,000-80,000), for the entire dataset and the sampled subset.]

Figure 8: Percentage of UNK tokens in the entire test data and the sampled subset of the test data, varying the UNK threshold from 10,000 to 80,000.

We train one NTN2T model for each threshold, and evaluate it using the sampled test set.
The accuracies of the different models are plotted in Figure 9. The trend of the different models' accuracies is similar to the trend of the percentage of non-UNK tokens in the test set. This is expected, since when the threshold increases the model has more chances to make correct predictions for originally-UNK queries. However, we observe that this is not always the case. For example, the accuracies of the models trained with thresholds 30,000 and 40,000 are almost the same, i.e., the difference is only 0.02%. We make similar observations among the models trained with thresholds 60,000, 70,000, and 80,000. Notice that we observed above that when we train 5 models with different random initializations, the variance of the accuracies of these models is within 0.1%. Therefore, we conclude that when we increase the UNK threshold from 30,000 to 40,000 and from 60,000 to 80,000, the accuracies do not change significantly. One potential explanation is that when increasing the UNK threshold, while the model has more chances to predict those otherwise-UNK terminals, it may also be more likely to make mistakes when it needs to choose the next terminal from more candidates.

[Figure 9 is a line plot of model accuracy (roughly 73-75.5%) against the UNK threshold (10,000-80,000).]

Figure 9: Accuracies of different models trained over the sampled subset of training data, varying the UNK threshold from 10,000 to 80,000."}, {"section_index": "8", "section_name": "5.4 JOINT PREDICTION", "section_text": "In this section, we evaluate different approaches to predict the next non-terminal and terminal together for the joint prediction task. In fact, NT2NT is designed for this task. Alternative approaches can predict the next non-terminal first, and then predict the next terminal based on the predicted next non-terminal. We choose the NTN2T method as the second step to predict the next terminal, and we examine two different approaches as the first step to predict the next non-terminal: N2N and NT2N. Therefore, we compare three methods in total.

The top 1 accuracy results are presented in Table 5. N2N+NTN2T is less effective than NT2N+NTN2T, as expected, since when predicting the non-terminal in the first step, N2N is less effective than NT2N, as we have shown in Table 4. On the other hand, NT2NT's performance is better than N2N+NTN2T's, but worse than NT2N+NTN2T's.

Table 5: Predicting non-terminal and terminal together

We observe that for all three combinations, we have

Pr(T_{k+1} = T̂_{k+1} ∧ N_{k+1} = N̂_{k+1}) > Pr(T_{k+1} = T̂_{k+1}) · Pr(N_{k+1} = N̂_{k+1})

These facts indicate that the events of the next non-terminal and the next terminal being predicted correctly are not independent, but highly correlated with each other. This is also the case for NT2NT, even though NT2NT predicts the next non-terminal and the next terminal independently conditional upon the LSTM hidden states. A small sketch of this check is given below.
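An illustrative sketch of the dependence check above (names hypothetical): given per-query correctness indicators, compare the empirical joint success rate with the product of the marginals.

```python
import numpy as np

def dependence_gap(nt_correct, t_correct):
    """Positive gap => correct-N and correct-T events are positively dependent.

    nt_correct, t_correct: boolean arrays, one entry per query.
    """
    joint = np.mean(nt_correct & t_correct)
    return joint - np.mean(nt_correct) * np.mean(t_correct)

# Toy example where the two events co-occur more often than independence predicts:
nt = np.array([True, True, False, True, False, True])
t = np.array([True, True, False, False, False, True])
print(dependence_gap(nt, t))   # 0.5 - (4/6)*(3/6) > 0
```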
Table 6: Deny prediction results. Top 1 accuracy is computed as the percentage of all queries (including the ones whose ground truth is UNK) that are predicted correctly, i.e., the prediction matches the ground truth even when the ground truth is UNK. Accuracy on non-UNK terminals measures the accuracy of each model on all non-UNK terminals. Deny rate is calculated as the percentage of all queries for which a model denies prediction. Prediction accuracy is the top 1 accuracy over those queries for which a model does not deny prediction, i.e., the prediction is not UNK.

[Figure 10 is a line plot of the overall accuracy and the accuracy on non-UNK terminals (roughly 70-82%) against α ∈ [0, 1].]

Figure 10: Overall accuracies and accuracies on non-UNK terminals, varying α"}, {"section_index": "9", "section_name": "5.5 DENYING PREDICTION", "section_text": "We compare the models which do not deny prediction (i.e., NT2NT and NTN2T) and those which do (i.e., NT2NT+D and NTN2T+D). Results are presented in Table 6. For reference, in the test set there are 7.42% UNK queries. We can observe that the deny prediction models (i.e., the +D models) have higher accuracies than the corresponding original models. This is expected: since deny prediction models are allowed to predict UNK terminals, while NT2NT and NTN2T fail on all UNK queries, the +D models will succeed on most of them. We further evaluate the accuracy on non-UNK terminals. One may expect that since +D models may prefer to predict UNK, a standard model should have a higher accuracy on non-UNK terminals than its deny prediction counterpart. The results show that this is indeed the case, but the margin is very small, i.e., 0.1% for NT2NT and 0.3% for NTN2T. This means that allowing denying prediction does not necessarily sacrifice a model's ability to predict non-UNK terminals.

We are also interested in how frequently a +D model denies prediction. We can observe that NTN2T+D denies prediction for only 6.1% of all queries, which is even less than the percentage of UNK queries (i.e., 7.42%). This shows that although we allow the model to deny prediction, it is conservative when exercising this privilege. This partially explains why NTN2T+D's accuracy on non-UNK terminals is not much less than NTN2T's.

Effectiveness of the value of α. We are interested in how the hyperparameter α in a +D model affects its accuracy. We train 11 different NTN2T+D models on the 1/10 subset of the training set, which was used above to examine the effectiveness of UNK thresholds, by varying α from 0.0 to 1.0. Notice that when α = 0.0, this model becomes a standard NTN2T model.

We plot both the overall accuracies and the accuracies on non-UNK terminals in Figure 10. We observe the same effect as above: 1) the overall accuracy for α = 1 is 6% higher than the one for α = 0; and 2) the accuracy on non-UNK terminals for α = 1 is less than the one for α = 0, but the margin is not large (i.e., less than 1%). When we increase α from 0 to 0.3, we can observe that the overall accuracy increases steeply. When we further increase α, however, the overall accuracy becomes steady. This is also the case for the accuracy on non-UNK terminals. The result of this experiment shows that how to set α is a trade-off between the overall accuracy and the accuracy on non-UNK terminals, and how to choose α depends on the application.

We evaluate our models' runtime performance. Our models are implemented in TensorFlow (Abadi et al. (2016)). We evaluate our models on a machine equipped with 16 Intel Xeon CPUs, 16 GB RAM, and a single Tesla K80 GPU. All queries from the same program are processed incrementally. That is, given two queries A, B, if A has one more node than B, then the LSTM outputs for B will be reused for processing A, so that only the additional node in A needs to be processed. Note that this is consistent with the practice where programs are written incrementally from beginning to end. For each model, we feed in one query at a time. There are 3939 queries in total,
There are 3939 queries in tota. coming from randomly chosen programs. We measure the overall response latency for each query. We observe that the query response time is consistent across all queries. On average, each mode. takes around 16 milliseconds to respond a query on GPU, and around 33 milliseconds on CPU. Note that these numbers are from just a proof of concept implementation and we have not optimizec. the code. Considering that a human being usually does not type in a token within 30 milliseconds. we conclude that our approach is efficient enough for potential practical usage. We emphasize tha these numbers do not directly correspond to the runtime latency when the techniques are deployec to a code completion engine, since the changes of AST serialization may not be sequential while. users are programming incrementally. This analysis, however, only provides an evidence to show. the feasibility of applying our approach toward a full-fledged code completion engine..\nIn this paper we introduce, motivate, and formalize the problem of automatic code completion. We describe LSTM-based approaches that capture parsing structure readily available in the code. completion task. We introduce a simple LSTM architecture to model program context. We then ex-. plore several variants of our basic architecture for different variants of the code completion problem. We evaluate our techniques on a challenging JavaScript code completion benchmark and compare. against the state-of-the-art code completion approach. We demonstrate that deep learning techniques. can achieve better prediction accuracy by learning program patterns from big code. In addition, we. find that using deep learning techniques, our models perform better for longer programs than for. shorter ones, and when the code completion engine can pop up a list of candidates, our approach. allows users to choose from the list instead of inputting the token over 96% of all time that this is possible. We also evaluate our approaches' runtime performance and demonstrate that deep code. completion has the potential to run in real-time as users type. We believe that deep learning tech. niques can play a transformative role in helping software developers manage the growing complexity. of software systems, and we see this work as a first step in that direction.."}, {"section_index": "10", "section_name": "REFERENCES", "section_text": "Miltiadis Allamanis and Charles Sutton. Mining idioms from source code. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pp. 472 483. ACM. 2014.\nWe thank the anonymous reviewers for their valuable comments. This material is based upon work partially supported by the National Science Foundation under Grant No. TwC-1409915, and a DARPA grant FA8750-15-2-0104. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation and DARPA.\nI. Beltagy and Chris Quirk. Improved semantic parsers for if-then statements. In ACL, 2016\nPavol Bielik, Veselin Raychev, and Martin Vechev. PHOG: Probabilistic Model for Code. In ICMI 2016.\nXinyun Chen, Chang Liu, Richard Shin, Dawn Song, and Mingcheng Chen. Latent attention for if-then program synthesis. In NIPS, 2016.\nMichael Collins. Head-driven statistical models for natural language parsing. Computational lin guistics, 29(4):589-637, 2003.\nLi Dong and Mirella Lapata. 
Language to logical form with neural attention. In ACL, 2016\nSepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8 1735-1780, 1997.\nPercy Liang, Michael I Jordan, and Dan Klein. Learning programs: A hierarchical bayesian ap proach. In Proceedings of the 27th International Conference on Machine Learning (ICML-10) pp. 639-646, 2010.\nTung Thanh Nguyen, Anh Tuan Nguyen, Hoan Anh Nguyen, and Tien N Nguyen. A statistical semantic language model for source code. In Proceedings of the 2013 9th Joint Meeting on. Foundations of Software Engineering, pp. 532-542. ACM, 2013.\nVeselin Raychev, Martin Vechev, and Eran Yahav. Code completion with statistical language models In ACM SIGPLAN Notices, volume 49, pp. 419-428. ACM. 2014\nVeselin Raychev, Pavol Bielik, and Martin Vechev. Probabilistic model for code with decision trees. In Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, pp. 731-747. ACM, 2016a.\nVeselin Raychev, Pavol Bielik, Martin Vechev, and Andreas Krause. Learning programs from noisy data. In POPL, 2016b."}] |
BysZhEqee | [{"section_index": "0", "section_name": "MARGINAL DEEP ARCHITECTURES: DEEP LEARNING FOR SMALL AND MIDDLE SCALE APPLICATIONS", "section_text": "Yuchen Zheng, Guoqiang Zhong & Junyu Dong\nIn recent years, many deep architectures have been proposed in different fields. However, to obtain good results, most of the previous deep models need a large. number of training data. In this paper, for small and middle scale applications, we. propose a novel deep learning framework based on stacked feature learning mod-. els. Particularly, we stack marginal Fisher analysis (MFA) layer by layer for the. initialization of the deep architecture and call it \"Marginal Deep Architectures'. (MDA). In the implementation of MDA, the weight matrices of MFA are first. learned layer by layer, and then we exploit some deep learning techniques, such. as back propagation, dropout and denoising to fine tune the network. To evalu-. ate the effectiveness of MDA, we have compared it with some feature learning. methods and deep learning models on 7 small and middle scale real-world ap. plications, including handwritten digits recognition, speech recognition, historical. document understanding, image classification, action recognition and so on. Ex-. tensive experiments demonstrate that MDA performs not only better than shallow feature learning models, but also state-of-the-art deep learning models in these. applications"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Deep learning methods have achieved desirable performance in many domains, such as image classi fication and detection, document analysis and recognition, natural language processing, video anal ysis (Krizhevsky et al.]2012} Chan et al.2014} Ciresan et al.]2010f Collobert & Weston2008 Le et al.|2011). Deep learning methods learn the data representation by using multiple processin, ayers, which discover the intricate structure of high dimensional data with multiple levels of ab straction (LeCun et al.]2015). For example, for face recognition, the learned features of first laye nay be the edges, directions and some local information. The second layer typically detects some bject parts which are combination of edges and directions. The higher layers may further abstrac the face image by combining the features of previous layers (outline of the eyes, nose, lips). This orocedure is very similar with human visual and perceptual system\nIn recently years, many deep learning methods have been proposed (1. Boureau & others. 2008 Lee et al.2 2009b a; Hinton & Salakhutdinov2006). However, most models meet some difficult. problems to solve, such as some parameters need to be randomly initialized, like the weight matrix of two successive layers in deep belief networks (DBNs) and the convolution kernel in convolutional neural networks (CNNs). In addition, traditional deep learning methods need a large scale training data to train the complex networks. It causes many problems in the training process. If we don't. initialize the parameters properly, the optimization procedure might need a long training time and. fall into local minima. Alternatively, many feature learning models have been proposed to learn the. intrinsic structures of high-dimensional data and avoid the curse of dimensionality. In particular most of them can be trained with small and middle scale of data and their learning algorithms. are generally based on closed-form solution or convex optimization. 
For instance, marginal Fisher analysis (MFA) (Yan et al.]2007]Zhong et al.]2013) is one of the feature learning models that is a supervised method based on the graph embedding framework. It utilizes an intrinsic graph. to characterize the intraclass compactness, and another penalty graph to characterize the interclass separability. Its optimal solution can be learned by generalized eigenvalue decomposition. However,"}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "on the one hand, shallow feature learning models cannot work well on the data with highly nonlineat structure; on the other hand, few efforts are made to combine shallow feature learning models for the design of deep architectures.\nIn order to simultaneously solve the existing problems in deep learning methods and combine the. advantages of feature learning models, we proposed a novel deep learning method based on stacked. feature learning models. Particularly, we stack marginal Fisher analysis (MFA) layer by layer foi the initialization of the deep architecture and call it \"Marginal Deep Architectures\" (MDA). First ly, the input data are mapped to higher dimensional space by using random weight matrix. Then we use MFA to learn the lower dimensional representation layer by layer. In the implementatior. of this architecture, we add some tricks in the training process, such as back propagation, dropout. and denoising to fine tune the network. Finally, the softmax layer is connected to the last feature layer. We have compared our MDA with some feature learning methods and deep learning mod. els on different domains of datasets (including handwritten digits recognition, speech recognition. historical document understanding, image classification, action recognition and so on). Extensive experiments demonstrate that MDA performs not only better than shallow feature learning models. but also state-of-the-art deep learning models in small and middle scale applications..\nThe contributions of this work are highlighted as follows\n1. We propose a novel structure to build a deep architecture. The first hidden layer has twice or quadruple neurons as the input layer. Then we can use some feature learning models layer by layei to learn the compact representations of data. Finally, we set the last layer as a softmax classifier.\n2. Traditional deep learning models in general need a large scale training data. Compared with. traditional deep learning models, MDA can work better than traditional deep learning models in small and middle scale applications because the initialization of the weight matrices using MFA is much better than that using random initialization..\n3. Our MDA can work well in different domains of datasets, such as handwritten digits, spoken letters and natural images. Extensive experiments demonstrate that MDA is a general model to handel small and middle scale data. On the other hand, for large scale datasets, like CIFAR-10 MDA works comparatively with other deep learning methods.\nThe rest of this paper is organized as follows: In Section2 we give a brief overview of related work. In Section[3] we present the marginal Fisher analysis (MFA) and the proposed marginal deep architectures (MDA) in detail. 
The experimental settings and results are reported in Section[4] while Section5]concludes this paper with remarks and future work.\nWith the development of deep learning methods, many deep networks have been proposed in recent years (Donahue et al.[2013} Krizhevsky et al.[2012} Long et al.]2015}Zhou et al.] 2014). These deep learning models show their powerful performance in various fields, such as image classification and analysis, document analysis and recognition, natural language processing et al. In the area of image analysis, Hinton et al. proposed a large, deep convolutional neural network (Alex net) to classify the 1.2 million high-resolution images in the ImageNet. It uses efficient GPU to speed their method. The results show that a large, deep convolutional neural network is capable of achieving recordbreaking results on a highly challenging dataset using purely supervised learning (Krizhevsky et al.|2012). In order to popularize the deep convolutional neural network, Donahue ea al. proposed DeCAF (Deep Convolutional Activation Feature) which is trained in a fully supervised fashion on a large, fixed set of object recognition tasks (Donahue et al.]2013). DeCAF provides a uniform framework for researchers who can improve and change this framework on some specific tasks However, its performance at scene recognition has not attained the same level of success. In order to handle this problem, Zhou et al. introduce a new scene-centric database called Places with over 7 million labeled pictures of scenes. Then, they learn the deep features for scene recognition tasks by using the same architecture as ImageNet, and establish new state-of-the-art results on several scene- centric datasets (Zhou et al.2014). However, these methods based on convolutional operation need very large scale training samples and a long training time. They can not work well on small and middle scale applications.\nIn other domains, deep learning methods also achieve good performance. Hinton et al. represent. the shared views of four research groups that have had recent successes in using DNNs for automat-. ic speech recognition (ASR). The DNNs that contain many layers of nonlinear hidden units and a. very large output layer can outperform Gaussian mixture models (GMMs) at acoustic modeling for. speech recognition on a variety of data sets (Hinton et al.]2012a). In the area of genetics, Xiong. et al. use \"deep learning\" computer algorithms to derive a computational model that takes as input. DNA sequences and applies general rules to predict splicing in human tissues (Xiong et al.]2015). It reveals the genetic origins of disease and how strongly genetic variants affect RNA splicing. In. the area of natural language understanding, deep learning models have delivered strong results on. topic classification, sentiment analysis et al. Sutskever et al. proposed a general approach, the Long. Short-Term Memory (LSTM) architecture which can solve the general sequence to sequence prob-. lems better than before (Sutskever et al.]2014). In addition, Hinton et al. proposed autoencoder. (AE) networks that is an effective way to learn the low-dimensional codes of high-dimensional data.. Based on autoencoder, there are also have many excellent works to handle various tasks. Vincent et al. proposed a denoising autoencoder (DAE) which maked the learned representations robust to. partial corruption of the input data (Vincent et al.||2008). The denoising autoencoder which initialize. 
the deep architectures layer by layer is very similar with human visual system. Hinton et al. intro-. duced random 'dropout' to prevent the overfitting which improve many benchmark tasks and obtain. new records for speech and object recognition (Hinton et al.]2012b). Then, Vincent et al. proposed. stacked denoising autoencoders (SDAE) which based on stacking layers of stacked denoising au-. toencoders (Vincent et al.[2010). It is very useful to learn the higher level representations and work. well on natural images and handwritten digits. However, for the same reason, they also need a large. scale training set and a long training time. They have no advantages to handle the small and middle. scale applications.\nMoreover, in the field of feature learning models, dimensionality reduction plays a crucial role to handle the problems for compressing, visualizing high-dimensional data and avoiding the \"curse of dimensionality\" (van der Maaten et al.[2009,van der Maaten2007). Traditional dimensionality reduction mainly can be classified into three types: linear or nonlinear, like principal components analysis (PCA) (Jolliffe] 2002) and linearity preserving projection (LPP) (Niyogi]2004) are linear methods, stochastic neighbor embedding (SNE) (Hinton & Roweis 2002) is a nonlinear method supervised or unsupervised, such as marginal Fisher analysis (MFA) (Yan et al.]2007) Zhong et al. 2013) and linear discriminant analysis (LDA) (Fisher 1936) are supervised methods, PCA is an unsupervised method; local or global, like MFA and SNE are local methods, PCA is a global method Many feature learning models based on geometry theory provide different solutions to the problem of dimensionality reduction. Yan et al. proposed a general formulation about graph embedding framework can exploit new dimensionality reduction algorithms (Yan et al.J|2007). If only directly use some feature learning models to extract the good representation from original data, it often eventually couldn't get a good outcome. Considering this situation, we try to choose some excellent feature learning models and combine them with some deep learning algorithms. MFA is one special formulation of the graph embedding models based on this framework. It utilizes an intrinsic graph to characterize the intraclass compactness, and another penalty graph to characterize the interclass separability. Our motivation is to combine the advantage of MFA and deep architectures and propose a new initialization method for deep learning algorithms.\nThere are also have some excellent works about feature learning models combined the deep archi tectures (Yuan et al.) George et al.]2014] Ngiam et al.]2011).Yuan et al. proposed an improved multilayer learning model to solve the scene recognition task (Yuan et al.). This model overcome the limitation of shallow, one-layer representations for scene recognition. Trigeorgis et al proposed deep Semi-NMF, that is able to learn such hidden representations from different, unknown attributes of a given dataset (George et al.|2014). Ngiam proposed a deep architectures to learn features over multiple modalities (Ngiam et al.||2011). They showed that multi-modality feature learning is better than one modality and achieved good performance on video and audio datasets. However, in gen- eral, we can only obtain data from one modality. 
In this work, we combine the advantages of MFA and deep architectures, which based on stacked feature learning models (Zheng et al.|2014} 2015) then we use some deep learning tricks, like back propagation, denoising and dropout to fine tuning. the network. The advantage of this deep architecture is that we can learn the desirable weight matrix even if the training data is not large enough. And compared with traditional deep learning models and shallow feature learning models, our MDA achieved state-of-the-art results in most cases..\nFigure 1: The brief representation for MDA. Wr, represents the first layer random weight matrix W M FA, and W M FA3 represent the weight matrixes learned by MFA. The dotted red lines represen the dropout operation, the dotted red circle is the 'dropout' node, and the cross nodes are corrupted The denoising and dropout operation are completely random. For simplicity, we have omitted bia. terms."}, {"section_index": "3", "section_name": "3.1 A NOVEL FRAMEWORK OF DEEP ARCHITECTURES", "section_text": "The feature learning problem is generally formulated as follow. Given n data, {xf , . ,XI} E RD where D is the dimensionality of the data space, we seeks the compact representations of these data i.e., {yT, ..., yT} E d, where d is the dimensionality of the low dimensional embeddings.\nD-D- ... 1\nwhere Dj is the first higher dimensional space, the number of the node is twice or quadruple as the input layer. D, represents the dimensionality of the i-th intermediate representation space, and p is the total steps of mappings. Here, we can use different feature learning models for the learn- ing of each layer. As the feature learning models are optimized layer by layer, we can obtain the mapping functions between successive layers. The first hidden layer is random by Wr1, and the representation is,\nBased on our novel framework of deep architecture, we introduce Marginal Fisher Analysis (MFA to build MDA. Here, many traditional feature learning models, such as linear discriminant analysis\nSoftmax layer Output WMFA3 The corrupted node Input Wr W MFA2 The dropout node Dropout operation\nIn this section, we firstly introduce a novel framework of deep architectures, then we introduce marginal Fisher analysis (MFA) and the proposed marginal deep architectures (MDA) in detail. In addition, we also present some deep learning tricks that we used in the MDA model, including back. propagation, denoising and dropout.\nIn order to improve the accuracy of shallow feature learning models, we use stacked feature learning models to construct the deep architectures (Zheng et al.]2014) 2015), which is a general framework for different applications. In this case, the mapping of data from the original D-dimensional space. to the resulted d-dimensional space can be described as.\na1 =q(WTx+b\nb.\n(LDA), can be used as building blocks of MDA. Take LDA as an example. It assumes that the data of each class follow a Gaussian distribution. However, this assumption is not often satisfied in the real world. Without this assumption, LDA can not work well to separate the data with nonlineai. structure. Alternatively, MFA can solve this problem effectively. Hence, considering the learning. capability, we choose MFA as the build blocks of MDA in our work. MFA used the graph embedding. framework to set up an intrinsic graph that characterizes the intraclass compactness and another. penalty graph which characterizes the interclass separability. The marginal Fisher criterion is defined. 
aS\ntr(WTx(D- A)XTw) W* = argmin tr(WTX(Dp - Ap)XTW W\nIn order to combine the advantages of MFA and proposed deep architectures, we propose the. marginal deep architectures (or MDA). The MDA inherited from the proposed novel framework. of deep architectures is shown in Fig.1] As an input vector x E [0, 1[d, we first map it to high. er dimensional space by a random weight matrix Wr1. The representation of first hidden layer is computed as\nwhere, I(x) is the indicator function, I(x) = 1 if x is true, else I(x) = 0. yi is the label correspond ing to x;. Then the probability that x; is classified to j is,\nexp(wTan-1 p(yi = j[Xi,W) = exp(wfan-\nTaking derivatives, one can show that the gradient is\nN 1 VJ(w) = N i=1"}, {"section_index": "4", "section_name": "3.4 BACK PROPAGATION", "section_text": "In order to adjust the network, we use back propagation (Rumelhart et al.||1986) to compute partia derivative and stochastic gradient descent to update the weight matrixes and the bias terms. For eacl node i in output layer (n-th layer), we compute an error term as\nWMFA = WPCAW\na1 =s(WTx+b\ns(w] MEA\nN K exp(wFan-1) 1 I(yi=j)log w N l=1 exp(wTan-1 i=1 j=1\nIf the n-- 1 layer's neurons are more than the last layer, we can continue using MFA to map it. On the contrary, If the n - 1 layer's neurons are less than last layer, we can randomly initialize the weight matrix between this two layers. Next, in order to improve the MDA, we introduce back propagation. denoising and dropout operation.\nor =VJ(w)\nwhere, J(w) is the cost function computed from Equ|8 and J(w) computed from Equ|10 each node i in (n - 1)-th to second layer, the error term is computed as,."}, {"section_index": "5", "section_name": "3.5 DENOISING OPERATION", "section_text": "Vincent et al. proposed the denoising autoencoder to improve the robustness of autoencoder (Vincent et al.|2008). It's very similar with the regularization methods and avoids the \"overfitting\"' problem The basic idea is to corrupt partial input data by the desired proportion of v \"destruction\". for each input x, a fixed number vd of components are chosen at random, and their value is forced to O, while the others are left untouched. The initial input x to get a partially destroyed version x by means of a stochastic mapping.\nIn our MDA, we use this idea to improve the network, please refer to Fig.1|to find clear sight. For the input layer, the output of first hidden layer is represented as."}, {"section_index": "6", "section_name": "3.6 DROPOUT", "section_text": "As the same reason with denoising operation, dropout is a trick to prevent overfitting (Hinton et al. 2012b). When a large feedforward neural network is trained on a small training set, dropout per. formed well on test set. In order to prevent the complex co-adaptations on the training data, the basi idea of dropout is that each hidden node is randomly omitted from the network with a probability o. 3, so a hidden node can't rely on other hidden node. In another view, dropout is as a very efficien way of performing model averaging with neural networks. On test set, we train many separate net. works and then to apply each of these networks to the test data. Dropout operation can save the trair. time and then we average the predictions produced by a very large number of different networks. Fig.1 shows the dropout operation in our MDA."}, {"section_index": "7", "section_name": "4.1 DATESET DESCRIPTIONS", "section_text": "We evaluate the performance of MDA on five benchmark data sets. 
The detail of the data is showed. in Tab1 The USPsdata set is a handwritten digits image data set includes 7291 training samples and 2007 test samples from 10 classes with 256 dimensional features. This task is to recognize. the digits O to 9. The Isolet |data set is a collection of audio feature vectors of spoken letters. from the English alphabet. It includes 6238 training samples and 1559 test samples from 26 classes. with 614 dimensional features. The task is to identify which letter is spoken based on the recorded\n'http://www.gaussianprocess.org/gpml/data/ 2http://archive.ics.uci.edu/ml/datasets/ISOLET\nk+1 8k = Whrok+1 j=1\nThe back propagation procedure relies on computing the gradient of an objective function witl respect to the weights of a multilayer stacked modules. It starting from the output at the top and enc o the input at the bottom.\nx ~ qD(xx)\nh=s(WTx+b\na2 = s(WTx+ b1\nwhere, Wr, is the first layer random weight matrix, bi is the bias term of first layer. The \"de- noising' operation is established to a hypothetical additional specific criterion: robustness to partial destruction of the input, which means a good intermediate representation is learned from unknown distribution of its observed input. This operation helps for learning more stable structure and avoids. the overfitting problem in most cases..\nTable 1: Characteristics of datasets used in evaluation\n(and pre-processed) audio signal. Sensor |is a sensorless drive diagnosis data set includes 46816 training samples and 11693 test samples from 11 classes with 48 dimensional features. The features. are extracted from electric current drive signals. The task is to classify 11 different classes with different conditions of the drive which has intact and defective components. Covertype*|contains. geological and map-based data from four wilderness areas located in the Roosevelt National Forest. of northern Colorado. It includes 15120 training samples and 565892 test samples from 7 classes. with 54 dimensional features. The task is to identify forest cover type from cartographic variables. For the IbnSina||ancient Arabic document data set, we use 50 pages of the manuscript for training. (17543 training samples) and 10 pages for testing (3125 test samples). The data samples belong tc. 174 classes of subwords and are of dimensionality 200.\nIn addition, we also use a large scale dataset CIFAR-10|6[to test our MDA on large scale appli cations. The CIFAR-10 dataset consists of 60000 32 32 colour images in 10 classes, with 6000 images per class. There are 500o0 training images and 10000 test images. We also test our MDA on a specific task which use the CMU motion capture (CMU mocap) data set[] The CMU mocap data set includes three categories, namely, jumping, running and walking. We choose 49 video sequences from four subjects. For each sequence, the features are generated using Lawrences method&] with dimensionality 93 (Zhong et al.|2010). By reason of the few samples of CMU, we adopt 10-fold cross-validation in our experiments and use the average error rate and standard deviation to evaluate the performance.\nIn order to evaluate the performance of MDA, we compared our MDA with 5 deep learning models. include autoencoder (AE) (Hinton & Salakhutdinov 2006), stacked autoencoders, denoising au- toencoders (Vincent et al.]2008), stacked denoising autoencoders (Vincent et al.]2010) and stacked denoising autoencoders with dropout, 2 feature learning models, MFA (Zhong et al.]2013] Yan. 
et al.[2007) and PCA (Jolliffe[2002), PCA deep architecture base on our uniform framework and the classification accuracy on original space.."}, {"section_index": "8", "section_name": "4.2.2 EXPERIMENTAL SETTINGS", "section_text": "All of the deep learning methods have the same settings. The size of minibatch was set to 100, the learning rate and momentum were the default value 1 and O.5, the number of epoch was set to 400. the dropout rate and denoising rate v were set to O.1. For the AE and SAE, weight penalty of the L2 norm was set to 10-4. For MFA, the number of nearest neighbors for constructing the intrinsic graph was set to 5, while that for constructing the penalty graph was set to 20. The target spaces of MFA and PCA on different data sets were showed in Tab|1] For the USPS data set, The architecture was set to 256 - 512 - 256 - 128 - 64 - 32. For the Isolet data set ,the architecture was set to 617 - 1324 - 617 - 308. For the Sensor data set, the architecture was set to 48 - 96 - 48 - 24.\n3http://archive.ics.uci.edu/ml/datasets/Dataset+for+Sensorless+Drive+Diagnosist 4http://archive.ics.uci.edu/ml/datasets/Covertype http://www.causality.inf.ethz.ch/al_data/IBN_SINA.html 6http://www.cs.toronto.edu/ kriz/cifar.html 7http://http://mocap.cs.cmu.edu/ 8http://is6.cs.man.ac.uk/~neill/mocap/\nDAIASETSIAIISTICS dataset n train test d(target) USPS 9298 7291 2007 10 256(32) Isolet 7797 6238 1559 26 614(308) Sensor 58509 46816 11693 11 48(24) Covertype 581012 15120 565892 7 54(27) Ibnsina 20668 17543 3125 174 200(100) CIFAR-10 60000 50000 10000 10 3072(64) CMU 49 44 5 3 93(24)\nTable 2: The classification accuracy on different datasets. \"ORIG\" represents the results obtaine. in the original data space. 'PDA' represents the PCA deep architecture. 'MDA' represents the MFA deep architecture. The best reslut is highlighted with boldface..\nMethod ORIG PCA MFA AE SAE DAE(dropout) DAE SDAE PDA MDA USPS 0.8366 0.9402 0.9392 0.9402 0.9402 0.9581 0.9532 0.9452 0.9586 0.9601 Isolet 0.9467 0.9237 0.9269 0.9519 0.9506 0.9596 0.9519 0.9543 0.9584 0.9622 Sensor 0.8151 0.8042 0.8234 0.7995 0.8325 0.8178 0.7764 0.7870 0.8582 0.8558 Covertype 0.5576 0.5596 0.6057 0.7405 0.5576 0.7093 0.7397 0.7440 0.7458 0.7589 Ibnsina 0.8957 0.9190 0.9206 0.9363 0.9184 0.9402 0.9370 0.9261 0.9421 0.9491\nTable 3: The structures on 5 data sets. \"None\"' represents without second layer in MDA. \"Twice. means the second layer's nodes are as twice as the input layer. \"Quadruple'' represents the secon. layer's nodes are as quadruple as the input layer. \"Octuple'' represents the second layer's nodes ar as octuple as the input layer.\nFor the Covertype data set, we set the architecture to 54 - 216 - 108 - 54 - 27. Finally, for Ibnsir data set, the architecture was set to 200 400 - 200 - 100."}, {"section_index": "9", "section_name": "4.2.3 CLASSIFICATION RESULTS", "section_text": "The experimental results are shown in Tab.[2] We can see that our MDA achieves the best results on four dataset except the Sensor dataset, but MDA achieves the second best result on Sensor data set and only below the PDA. The PDA achieves the best result on Sensor data set and the second best results on other data sets. These results demonstrate that our uniform deep architectures achieve the good performance in most case. In addition, MDA not only outperform the traditional deep learning models, but also the shallow feature learning models. 
It shows that our deep architectures based on stacked some feature learning models can learn the better feature than shallow feature learning models."}, {"section_index": "10", "section_name": "4.3.1 DIFFERENT STRUCTURES FOR MDA", "section_text": "In order to evaluate the desired structures of MDA, we changed the node's number of the second layer. For USPS data set, we get rid of the second layer and the architecture was 256- 128- 64- 32 Then, we set the number of node of the second layer was as twice as the input layer, the architecture was 256 - 128 - 64 - 32. Next, the number of node was as quadruple as the input layer, the architecture was 256 - 1024 - 512 - 256 - 128 - 64 32. Finally, the node's number is as octuple as the input layer, the architecture was 256 - 2048 - 1024 - 512 256 - 128 - 64 - 32. The structures of other data sets are shown in Tab.3\nThe experimental results are shown in Tab.[4] When the the number of nodes of the second layer is as twice as the input layer, MDA achieved the minimum classification error on all data sets except. the Covertype data set. When the number of nodes of the second layer is as quadruple as the input.\nTable 4: The classification error with different structures on 5 data sets. The best results (minimur error) are highlighted with boldface.\nDataset None Twice Quadruple Octuple USPS 0.0463 0.0399 0.0433 0.0453 Isolet 0.0398 0.0378 0.0417 0.0430 Sensor 0.2172 0.1442 0.1559 0.7856 Covertype 0.3876 0.3878 0.2411 0.3806 Ibnsina 0.0643 0.0509 0.0614 0.0858\nDataset None Twice Quadruple Octuple USPS 256-128-64-32 256-512-256-128-64-32 256-1024-512-256-128-64-32 256-2048-1024-512-256-128-64-32 Isolet 617-308 617-1324-617-308 617-2648-1324-617-308 617-5296-2648-1324-617-308 Sensor 48-24 48-96-24 48-192-96-24 48-384-192-96-24 Covertype 54-27 54-108-27 54-216-108-54-27 54-432-216-108-54-27 Ibnsina 200-100 200-400-200-100 200-800-400-200-100 200-1600-800-400-200-100\nTable 5: The classification error on 5 datasets with different number of hidden layers\nThe number of hidden layers 1 2 3 4 5 6 7 USPS 0.05780 0.05032 0.05082 0.05182 0.03990 0.05730 0.05132 Isolet 0.03977 0.04169 0.03785 0.03849 0.05452 0.04234 0.04683 Covertype 0.27171 0.25601 0.24110 0.26731 0.27491 - Sensor 0.20457 0.15522 0.14420 0.17027 0.15567 - - Ibnsina 0.05696 0.05184 0.05088 0.06016 0.06720 - -\nTable 6: The classification error on. gray-CIFAR10 and CMU mocap data sets\n(a) gray-CIFAR10 (b) CMU mocap Method Error Method Error AE 0.5117 AE 0.3970 0.1343 SAE 0.5252 SAE 0.4106 0.1648 DAE(dropout) 0.5090 DAE(dropout) 0.3970 0.1343 DAE 0.5176 DAE 0.4061 0.1540 SDAE 0.5113 SDAE 0.3970 0.1343 PDA 0.5085 PDA 0.3591 0.0815 MDA 0.4947 MDA 0.3636 0.0958\nlayer, MDA get the worst result on Covertype data set. We can conclude that MDA can work well when the number of nodes of the second layer is as twice or quadruple as the input layer."}, {"section_index": "11", "section_name": "4.3.2 DIFFERENT NUMBER OF HIDDEN LAYERS FOR MDA", "section_text": "In order to evaluate how many hidden layers adapt to different datasets, we designed some experi ments which have different number of hidden layers. We used 1 ~ 7 hidden layers on USPS and. Isolet datasets and 1 ~ 5 hidden layers on Covertype, Sensor and Ibnsina datasets. The experimental. settings were same as previous experiments..\nTab. 5| shows the classification error on 5 datasets with different hidden layers. All the datasets achieved the best results when hidden layer's number is 3 except USPS dataset. 
The USPS dataset achieved the best result when hidden layer's number is 5. As 1 ~ 3 hidden layers, with the increase of the number of layers, the classification error is decreasing on all datasets. As small and middle scale applications, we don't need very deep architectures to handle it. As large scale applications. we can design deeper architectures to achieve better performance.\nThe previous section introduced the advantages of MDA on small and middle scale applications. Ir order to evaluate the universality of MDA, we chose a relatively large scale dataset CIFAR-10 to test. the performance of MDA.\nIn our experiments, we first transformed the color images to gray images in order to reduce the dimensionality of input. Then we took one sample as a 1024 dimensional vector which is the input of our MDA. So, we can call this data set gray-CIFAR10. The architecture was set to 1024 2048 - 1024-512-256-128-64, the minibatch's size was set to 100, the dropout ratio and denoising ratio were set to 0.1, the number of epoch was set to 400, the learning rate was set to 1, the momentum was set to 0.5. We compared our MDA with previous 6 methods.."}, {"section_index": "12", "section_name": "4.5 CLASSIFICATION ON CMU MOCAP DATA SET", "section_text": "CMU mocap data set is a very small dataset that only has 49 samples. Traditional deep learning methods didn't work well in these kind of applications. We test our MDA and PDA and comparec them with other 5 deep learning models. The architectures for all deep models (except the PDA were set to 93 - 186 - 93 - 47 24. Specially, since the CMU mocap data set only has 49 samples the PCA method only reduce the dimensionality to 49 at most, so the architecture of PDA was set tc\nTable.[6(a)|shows the classification error on gray-CIFAR10, we can see that PDA and MDA achieved he best results in these 7 methods. However, all of the methods on this framework didn't perform well because we use the gray operation."}, {"section_index": "13", "section_name": "5 CONCLUSION", "section_text": "In this paper, we proposed a novel deep learning framework that based on stacked some feature learning models to handle small or middle data sets. Then we introduce MFA in this framework. called MDA. The deep learning tricks like backpropagation, denoising and dropout operation are applied on MDA to improve its performance. Extensive experiments on 7 different type data sets demonstrate that MDA performs not only better than shallow feature learning models, but also state- of-the-art deep learning models on small and middle scale applications. The evaluation of MDA show that how to adjust the parameters make the MDA work well. For future work, we plan to try other feature learning models and explore the different structures for this novel deep learning model In addition, we plan to explore new deep architectures based on this framework to handle the large scale datasets."}, {"section_index": "14", "section_name": "REFERENCES", "section_text": "93 - 186 24. The denoising ratio and dropout ratio were set to 0.1 on DAE, DAE with dropout.. SDAE, SAE, PDA and MDA. The weight penalty on AE was set to 10-4. The learning rate was set to O.01, the momentum was set to 0.5 and the number of epoch is set to 600. The experiment was test on 10-fold cross validation. The experimental results are shown in Tab.[6(b).\nIn Tab.6(b)] our PDA and MDA achieved the best results in this dataset and have lower standard deviation than other deep learning models. 
It demonstrates that our PDA and MDA are more stable than other deep learning models. The traditional autoencoder, SDAE, DAE with dropout achieved the same result in this dataset and better than SAE and DAE.\nREFERENCES T.-H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma. PCANet: A simple deep learning baseline. for image classification? arXiv preprint arXiv:1404.3606, 2014. D. C. Ciresan, U. Meier, L. M. Gambardella, and J. Schmidhuber. Deep, big, simple neural nets for handwritten digit recognition. Neural computation, 22(12):3207-3220, 2010 R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural. networks with multitask learning. In ICML, pp. 160-167, 2008. J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep. convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531, 2013. R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of eugenics, 7(2):. 179-188, 1936. T. George, B. Konstantinos, Z. Stefanos, and Bjorn W. Schuller. A Deep Semi-NMF Model for Learning Hidden Representations. In ICML, pp. 1692-1700, 2014. G. Hinton and R. Salakhutdinov. Reducing the Dimensionality of Data with Neural Networks.. Science, 313(5786):504-507, 2006. G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. N. guyen, T. N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition:. The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82-97,. 2012a. G. E. Hinton and S. T. Roweis. Stochastic neighbor embedding. In NIPS, pp. 833-840, 2002.. G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neu-. ral networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012b\nY. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, pp. 436-444, 2015\nM. Long, Y. Cao, J. Wang, and M. Jordan. Learning Transferable Features with Deep Adaptation Networks. In ICML, pp. 97-105, 2015.\nX. Niyogi. Locality. Oreserying projections. In NIPS, volume 16, pp. 153, 2004\nY. 1. Boureau and Y.L. Cunand others. Sparse feature learning for deep belief networks. In NIPS. pp. 1185-1192, 2008. Q. V. Le, W. Y. Zou, S. Y. Yeung, and A. Y. Ng. Learning hierarchical invariant spatio-temporal. features for action recognition with independent subspace analysis. In CVPR, pp. 3361-3368, 2011.\nNgiam, A. Khosla, INall1.LC Mlutnnoaalaee pp. 689-696, 2011. X. Niyogi. Locality preserving projections. In NIPS, volume 16, pp. 153, 2004.. D. E Rumelhart, G. E. Hinton, and R. G. Williams. Learning representations by back-propagating errors. Nature, pp. 323-533, 1986. I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In. NIPS, pp. 3104-3112, 2014. L. J. van der Maaten. An introduction to dimensionality reduction using matlab. Report, 1201. (07-07):62, 2007. L. J. van der Maaten, E. O. Postma, and H. J. van den Herik. Dimensionality reduction: A compar ative review. The Journal of Machine Learning Research, 10(1-41):66-71, 2009. P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features. with denoising autoencoders. In ICML, pp. 1096-1103, 2008. P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoder- s: Learning useful representations in a deep network with a local denoising criterion. 
The Journal of Machine Learning Research, 11:3371-3408, 2010. H. Y. Xiong, B. Alipanahi, L. J. Lee, H. Bretschneider, D. Merico, R. K. Yuen, Y. Hua, S. Guer- oussov, H. S. Najafabadi, T. R. Hughes, et al. The human splicing code reveals new insights into the genetic determinants of disease. Science, 347(6218):1254806, 2015. S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin. Graph Embedding and Extensions: A General Framework for Dimensionality Reduction. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(1):40-51, 2007. Y. Yuan, L. Mou, and X. Lu. Scene recognition by manifold regularized deep learning architecture Neural Networks and Learning System, IEEE Transactions on.. Y. Zheng, G. Zhong, J. Liu, X. Cai, and J. Dong. Visual Texture Perception with Feature Learning. Models and Deep Architectures. In CCPR, pp. 401-410. 2014. Y. Zheng, Y. Cai, G. Zhong, Y. Chherawala, Y. Shi, and J. Dong. Stretching Deep Architectures for Text Recognition. In ICDAR, pp. 236-240, 2015. G. Zhong, W.-J. Li, D.-Y. Yeung, X. Hou, and C.-L. Liu. Gaussian Process Latent Random Field In AAAI, 2010. G. Zhong, Y. Chherawala, and M. Cheriet. An Empirical Evaluation of Supervised Dimensionality Reduction for Recognition. In ICDAR, pp. 1315-1319, 2013.\nZheng, Y. Ca1, O Cnnerawala. g Deep Architectures for Text Recognition. In ICDAR, pp. 236-240, 2015. G. Zhong, W.-J. Li, D.-Y. Yeung, X. Hou, and C.-L. Liu. Gaussian Process Latent Random Field. In AAAI, 2010. G. Zhong, Y. Chherawala, and M. Cheriet. An Empirical Evaluation of Supervised Dimensionality Reduction for Recognition. In ICDAR, pp. 1315-1319, 2013. B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recog. nition using places database. In NIPS, pp. 487-495, 2014."}] |
Hkz6aNqle | [{"section_index": "0", "section_name": "DEEP ERROR-CORRECTING OUTPUT CODES", "section_text": "Guoqiang Zhong\nDepartment of Computer Science and Technology Ocean University of China\nDepartment of Computer Science and Technology Ocean University of China\nDepartment of Computer Science and Technology Ocean University of China.\nExisting deep networks are generally initialized with unsupervised methods, such. as random assignments and greedy layerwise pre-training. This may result in the whole training process (initialization/pre-training + fine-tuning) to be very time-. consuming. In this paper, we combine the ideas of ensemble learning and deep. learning, and present a novel deep learning framework called deep error-correcting. output codes (DeepECOC). DeepECOC are composed of multiple layers of the. ECOC module, which combines multiple binary classifiers for feature learning. Here, the weights learned for the binary classifiers can be considered as weights between two successive layers, while the outputs of the combined binary classi- fiers as the outputs of a hidden layer. On the one hand, the ECOC modules can be learned using given supervisory information, and on the other hand, based on the. ternary coding design, the weights can be learned only using part of the training data. Hence, the supervised pre-training of DeepECOC is in general very effective and efficient. We have conducted extensive experiments to compare DeepECOC with traditional ECOC, feature learning and deep learning algorithms on several. benchmark data sets. The results demonstrate that DeepECOC perform not only better than traditional ECOC and feature learning algorithms, but also state-of-. the-art deep learning models in most cases.."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Error correcting output codes (ECOC) are an ensemble learning framework to address multi-class classification problems (Dietterich & Bakirif|1995). The work by (Zhong & Liu]2013) shows thai the ECOC methods can also be used for feature learning, in either a linear or a nonlinear manner However, although sophisticated coding and decoding strategies are applied (Escalera et al.]2010 Zhong et al.] 2012] Zhong & Cheriet2013), the learnability of ECOC is limited by its single layer structure. Therefore, to exploit the advantages of the ECOC framework, such as supervisec ensemble learning and effective coding design, it's necessary to combine its ideas with that of deep learning.\nIn recent years, many deep learning models have been proposed to handle various challenging prob lems. Meantime, desirable performances in many domains have been achieved, such as image clas sification and detection, document analysis and recognition, natural language processing, and videc analysis (Hinton & Salakhutdinov2006;|Krizhevsky et al.2012)Szegedy et al.|2014) Simonyan & Zisserman 2014;|Zhang et al.f|2015}Wang & Ji]2015f|Hong et al. 2015). 
Among others, (Hinton &\nYuchen Zheng\nDepartment of Computer Science and Technology Ocean University of China."}, {"section_index": "2", "section_name": "Mengqi Li", "section_text": "Department of International Trade and Economy Ocean University of China.\nenri9615@outlook.com"}, {"section_index": "3", "section_name": "ABSTRACT", "section_text": "To overcome the limitations of both traditional ECOC methods and deep learning models, and mean while, take advantages of both of them, in this paper, we propose a novel deep learning model callec deep error-correcting output codes (DeepECOC). DeepECOC are composed of multiple stacked E COC modules, each of which combines multiple binary classifiers for feature learning. Here, the weights learned for the binary classifiers can be considered as weights between two successive lay ers, while the probabilistic outputs of the combined binary classifiers as the outputs of a hidder layer or new representations of data. On the one hand, the ECOC modules can be learned layer by layer using the given supervisory information, and on the other hand, based on the ternary coding design, some classes of data are automatically neglected when training the binary classifiers, such that the weights are learned only using part of the training data. Hence, the supervised pre-training of DeepECOC is in general very effective and efficient. We have compared DeepECOC with tra ditional ECOC, feature learning and deep learning algorithms to demonstrate the effectiveness and superiority of DeepECOC. The results are reported in Section|4\nThe rest of this paper is organized as follows: In Section[2] we give a brief overview to related work In Section[3] we present the proposed model, DeepECOC, in detail. The experimental results are reported in Section|4] while Section[5|concludes this paper with remarks and future work.\nTraditional ECOC framework has two steps: coding and decoding. In the coding step, an E-. COC matrix is defined or learned from data, and the binary classifiers are trained based on the ECOC coding; in the decoding step, the class label is given to a test sample based on a similarity. measure between codewords and outputs of the binary classifiers. The widely used coding strate- gies include one-versus-all (OneVsAll) (Nilsson1965), one-versus-one (OneVsOne) (Hastie et al. 1998), discriminant ECOC (DECOC) (Pujol et al.]2006), ECOC optimizing node embedding (E) COCONE) (Escalera et al.]2006), dense and sparse coding (Escalera et al.]2009] Allwein et al. 2001), and so on. Among them, the OneVsAll, OneVsOne, dense and sparse coding strategies are. problem-independent, whilst the DECOC and ECOCONE are problem-dependent. Generally, the. length of the codeword by the OneVsAll, OneVsOne, DECOC and ECOCONE coding designs is related to the number of classes, but that by the dense and sparse coding design is relatively flex- ible. In this work, we design the structure of DeepECOC based on the properties of each coding. strategy. The commonly used binary ECOC decoding strategies are the Hamming decoding (Nils son]1965) and Euclidean decoding (Hastie et al.][1998). For ternary ECOC decoding strategies, the attenuated Euclidean decoding (Pujol et al.|2008), loss-based decoding (Allwein et al.|2001), and probabilistic-based decoding (Passerini et al.[|2004) are widely used. Currently, the state-of-the-art. ternary ECOC decoding strategies are the discrete pessimistic beta density distribution decoding and loss-weighted decoding (Escalera et al.]2010). 
In this work, for the simplicity of back propagation,. we directly add a Softmax layer at the top of DeepECOC for the decoding. Note that, although many sophisticated coding and decoding strategies have been proposed in recent years (Escalera et al.[2010]Zhong et al.]2012] Zhong & Cheriet2013), the learnability of ECOC is limited by its single-layer structure. To further exploit the advantages of ECOC, such as supervised ensemble combine its ideas with that of deen learning.\nSalakhutdinov2o06) presents the ground-breaking deep autoencoder that learns the weight matrices. by pre-training the stacked restricted Boltzmann machines (RBMs) and fine-tuning the weights using. gradient descent. It delivers much better representations of data than shallow feature learning algo. rithms, such as principal components analysis (PCA) (Jolliffe| 1986) and latent semantic analysis. (LSA) (Deerwester et al.]1990). In order to boost the traditional autoencoder and prevent the \"over- fitting\" problem, (Vincent et al.2008) introduces the denosing autoencoder that corrupted the data. with a random noise. Recently, most of the research focuses on deep convolutional neural networks. (CNNs) and recurrent neural networks (RNNs), which greatly improves the state-of-the-art in the ar-. eas of object recognition, unsegmented handwriting recognition and speech recognition (Krizhevsky et al.][2012][Graves et al.]2009|[Sak et al.[[2014). However, existing deep networks are generally ini tialized with unsupervised methods, such as random assignments and greedy layerwise pre-training. In the case of random initialization, to obtain good results, many training data and a long training. time are generally used; while in the case of greedy layerwise pre-training, as the whole training. data set needs to be used, the pre-training process is very time-consuming and difficult to find a. stable solution.\nFigure 1: Two coding matrices encoded with the one-versus-all (binary case) and one-versus-one (ternary case) coding strategies\nIn the literature of deep learning, there is some work that attempts to construct a deep architecture with multiple feature learning methods (Hinton & Salakhutdinov|2006;Trigeorgis et al.|2014)Yuar et al.]2015]Zheng et al.]2015] 2014). For instance, deep autoencoder is built up by RBMs (Hinton & Salakhutdinov2006), and deep semi-NMF combines multiple steps of matrix factorization (Tri georgis et al.2014). Similarly, deep CNNs and RNNs can also be considered as deep models that learn the new representations of data layer by layer [Krizhevsky et al.2012]Graves et al.2009] Sak et al.|2014). The success of these existing models demonstrate that deep networks are beneficial tc the representation learning tasks, especially for the large scale applications. However, as discussed in the previous section, existing deep learning models are generally initialized with unsupervisec methods, such as random assignments and greedy layerwise pre-training, which result in a long training time of the deep models. In this work, we propose the DeepECOC model, which is based on the stacked ECOC modules. When pre-training DeepECOC, the ECOC modules can be learned with the available supervisory information. Intuitively, as this manner of supervised pre-training has deterministic objective, the learned value of the parameters will be very close to the best local minimum on the solution manifold. 
Experimental results shown in Section4 also demonstrate this fact."}, {"section_index": "4", "section_name": "DEEP ERROR-CORRECTING OUTPUT CODES (DEEPECOC)", "section_text": "In this section, we first introduce the traditional ECOC framework, which is the important buildir block of DeepECOC. Then we present the learning procedures of DeepECOC in detail."}, {"section_index": "5", "section_name": "3.1 THE ECOC FRAMEWORK", "section_text": "Error correcting output codes (ECOC), which combine multiple binary classifiers to solve multi. class classification problems, are an ensemble learning framework. The ECOC methods in gen. eral consist of two steps: coding and decoding. In the coding step, the ECOC coding matrix. M E {-1, 1}CL (binary case) or M E {-1, 0, 1}CL (ternary case) is first defined or learned from the training data, where each row of M is the codeword of a class, each column corresponds. to a dichotomizer (binary classifier), L is the length of the codewords (the number of binary classi fiers), C is the number of classes, symbol '1' indicates positive class, '-1' indicates negative class,. and 'O' indicates that a particular class is not considered by a given classifier. Then, the binary. classifiers (dichotomizers) are trained according to the partition of the classes in the columns of M. Fig.1shows two coding matrices encoded with the one-versus-all (binary case) and one-versus-one. (ternary case) coding strategies. The matrix is coded using several dichotomizers for a 4-class prob. lem with respective codewords {y1, . .. , y4}. The white girds are coded by 1 (considered as positive. class by the respective dichotomizer h), the dark girds by -1 (considered as the negative class),. and the gray girds by O (classes that are not considered by the respective dichotomizer h). In the. decoding step, the test data are predicted based on an adopted decoding strategy and the outputs of. the binary classifiers.\nIn order to take the probabilistic outputs of the base classifiers as new representations of data, we adopt linear support vector machines (linear SVMs) as the binary classifiers (dichotomizers), which solve a quadratic programming problem\nhh h3 h4 h5 h6 y1 y 2 y 3 y 4 (b) one-versus-one\nN min J(w) = + C w,b,i i=1 s.t. yifxi)1-,i>0,i=1,...,\nwhere w and b are the coefficients and bias of the binary classifier, yi E {+1, -1}, &t's are the slack. variables, and N is the number of the training data. The discriminant function can be expressed as\nN W QiYiXi; i=1\nNsv 1 Yi-w'x Nsy XiESV,i=1\nwhere k(x,xj) = xf'x, is the linear kernel function, X is a constant number, and a =. {Q1,..., Qv} is the vector of Lagrange multipliers. Replacing the linear kernel function with a nonlinear kernel, such as the Gaussian kernel.\nk(xi,x;) =exp(-o-1|x;-x|l2)\nwe can learn a nonlinear SVM, where is the parameter for the Gaussian kernel function. The discriminant function of SVMs with a nonlinear kernel can be written as\nApplying a decoding strategy on the outputs of the binary classifiers, the ECOC framework can be. used for multi-class learning, while applying the sigmoid function on the values of the discriminant function, ECOC can be used for feature learning (Zhong & Liu]2013). This is also the foundation. 
of the DeepECOC model"}, {"section_index": "6", "section_name": "3.2 DEEPECOC", "section_text": "To combine the advantages of ECOC and deep learning algorithms, we build the DeepECOC archi tecture as follows\nW 1 Wn-1 qD W softmax x x hr - b1 b2 bn-1\nwhere the first step makes the clean input x E [0, 1]d partially destroyed by means of a stochastic mapping x ~ qp(x x). In the corrupting process, we set a parameter called denoising rate v For each input x, a fixed number vd of components are chosen at random, and their value is forcec to O, while the others are left untouched. This operation makes the model more robust and preven the overfitting problem in most cases (Vincent et al.]2008). Subsequently, the \"corrupted\" data are taken as inputs for the DeepECOC model. W1 and b1 are the weight matrix and bias vector learned from the first ECOC module. The output of the first hidden layer is denoted as\nf(x)=wx+b.\nwhere a;'s are the non-negative Lagrange multipliers, Nsy is the number of support vectors and SV is the set of support vectors. The dual form of Problem (1) can be written as.\nN N 1 max Qi QjQj YiYjXf Xj 2 Q i=1 i,j=1 N N 1 Qi Q;Qj YiYjk(Xi,Xj) 2 i=1 i,j=1 S.t. 0<Q,i=1,...,N, N QiYi = 0, i=1\nN f(x) =aiyik(xi,x)+b. i=1\nh1 = s(Wfx+ b1)\nhk = s(Wfhk-1 + bk)\nHere. h. can be viewed as an activation output and a new re presentation of the input datum x\nFor example, if we adopt the OneVsAll coding strategy for one layer of the ECOC module. we first define the coding matrix MCC, where C is the number of classes. Then, we car. train C SVM classifiers to obtain the weight matrix W = {w1,..., wi,..., wc} and the bia. b = {b1,..., bi, ..., bc}. Next, we calculate the output of the first layer by using Eq. (9. Subse. quently, we repeat this process layer by layer to build the DeepECOC model. It's obvious that, if w. adopt different coding strategies, we can get different kinds of DeepECOC architectures.\nFor the last layer of DeepECOC, we employ the softmax regression for the multi-class learning. Its cost function is defined as\nTaking derivatives, one can show that the gradient of J(w) with respect to w is\nN 1 [xi(I(yi=j)-p(yi=j|xi,w)] VJ(w) = N i=1\nAfter the pre-training step, we use back propagation (Rumelhart et al.|1988) to fine tune the whole architecture. Moreover, we also employ a technique called \"dropout' for regularization (Hinton et al.| 2012). When a large feedforward neural network is trained on a small training set, dropout generally performs well on the test set. The basic idea of dropout is that each hidden node is ran- domly omitted from the network with a probability of 3. In another view, dropout is a very efficient way to perform model averaging with neural networks. Through these processes, we finally obtain the DeepECOC model, which is robust and easy to be applied to multi-class classification tasks.\nNote that, compared to existing deep learning algorithms, DeepECOC have some important ad- vantages. Firstly, unlike previous deep learning algorithms, DeepECOC are built with the ECOC modules and pre-trained in a supervised learning fashion. Secondly, if we adopt ternary coding s- trategies, due to the natural merit of ECOC, the weights can be learned using only part of the training data. 
Thirdly, in contrast to the learning of the weight matrices in previous deep learning models the binary classifiers in each ECOC module can be learned in parallel, which may greatly speed up the learning of DeepECOC."}, {"section_index": "7", "section_name": "4 EXPERIMENTS", "section_text": "where s(.) is the sigmoid activation function s(x) = . From the second layer to the (n - 1) th layer, we use the stacked ECOC modules to learn the weight matrices and biases, which can be considered as weights between two successive layers of a deep network. Similarly, we use the output of the (k - 1)-th layer as the input of the k-th layer,\nN K 1 exp(wThn-1) >>`I(yi=j) log N exp(w) i=1 j=1\nwhere I(x) is the indicator function, I(x) = 1 if x is true, else I(x) = 0. yi is the label corresponding to x,. It's easy to compute the probability that x; is classified to class j,\nexp(wThn-1) p(yi=jxi,w)= t=1 exp(wfhn-1)\nTo evaluate the effectiveness of the proposed method, DeepECOC, we conducted 4 parts of experi ments. In the first part, we compared DeepECOC with some deep learning models and single-layer ECOC approaches on 16 data sets from the UCI machine learning repository[] In the second part, we compared DeepECOC with traditional feature learning models, some deep learning models and single-layer ECOC approaches on the USPS handwritten digits] and tested DeepECOC with dif- ferent number of hidden layers. In the third part, we used the MNIST handwritten digits|to further.\nTable 1: Details of the UCI data sets (T: training samples; A: attributes; C: classes)\ndemonstrate the effectiveness of DeepECOC for handwritten digits recognition. Finally, the CIFAR. 10 data set|was used to demonstrate the effectiveness of DeepECOC on image classification tasks For all the data sets, the features were normalized within [0, 1]. In the following, we report the. experimental results in detail.\nThe detail of the UCI data sets are shown in Table1 In these experiments, we compared DeepECOC with autoencoder (AE) (Hinton & Salakhutdinov 2006), denoising autoencoder (DAE) (Vincent et al.[2008) and single-layer ECOC approaches (Single) (Escalera et al.[2010). We built DeepECOC with the ECOC optimizing node embedding (ECOCONE) coding method (Escalera et al.]2006) Here, since we initialized ECOCONE with 3 different coding methods, i.e. one-versus-one, one-. versus-all and DECOC. DeepECOC had 3 variants. In addition. the state-of-the-art linear loss-\nProblem # of T # of A # of C Problem # of T # of A # of C Dermatology 366 34 6 Yeast 1484 8 10 Iris 150 4 3 Satimage 6435 36 7 Ecoli 336 8 8 Letter 20000 16 26 Wine 178 13 3 Pendigits 10992 16 10 Glass 214 9 7 Segmentation 2310 19 7 Thyroid 215 5 3 Optdigits 5620 64 10 Vowel 990 10 11 Shuttle 14500 9 7 Balance 625 4 3 Vehicle 846 18 4\nTable 2: Classification accuracy and standard deviation obtained by DeepECOC and the compared approaches on 16 UCI data sets. Here, DeepECOC(1)~ DeepECOC(3) are 3 variant of DeepECOC with the ECOCONE coding design initialized by one-versus-one, one-versus-all and DECOC re- spectively. 
The best results are highlighted in boldface..\nProblem AE DAE DeepECOC DeepECOC DeepECOC Single (1) (2) (3) 0.9429 0.9674 0.9702 0.9779 0.9747 Dermatology 0.9513 0.0671 0.0312 0.0354 0.0208 0.0318 0.9600 0.9333 0.9600 0.9267 0.9533 Iris 0.9600 0.0562 0.0889 0.0535 0.1109 0.0383 0.7725 0.8000 0.8529 0.8824 0.9118 Ecoli 0.8147 0.0608 0.0362 0.0403 0.0626 0.0636 0.9765 0.9563 0.9875 0.9813 0.9688 Wine 0.9605 0.0264 0.0422 0.0264 0.0302 0.0329 0.6669 0.6669 0.7895 0.7368 0.7562 Glass 0.6762 0.1032 0.0715 0.0788 0.1140 0.0879 0.9513 0.9599 0.9656 0.9703 0.9608 Thyroid 0.9210 0.0614 0.0567 0.0513 0.0540 0.0518 0.6985 0.7101 0.7475 0.6010 0.6863 Vowel 0.7177 0.0745 0.0756 0.0901 0.0627 0.0788 0.8036 0.8268 0.9137 0.8333 0.9167 Balance 0.8222 0.0320 0.0548 0.0412 0.0318 0.0312 0.5641 0.5891 0.5959 0.5494 0.5697 Yeast 0.5217 0.0346 0.0272 0.0599 0.0434 0.0462 0.8675 0.8897 0.8961 0.8360 0.9077 Satimage 0.8537 0.0528 0.0304 0.0480 0.0390 0.0555 0.9234 0.9381 0.9532 0.9247 0.9501 Letter 0.9192 0.0547 0.0641 0.0341 0.0352 0.0563 0.9831 0.9886 0.9908 0.9866 0.9899 Pendigits 0.9801 0.0123 0.0034 0.0031 0.0107 0.0075 0.9584 0.9596 0.9711 0.9584 0.9711 Segmentation 0.9701 0.0317 0.0211 0.0286 0.0163 0.0233 0.9785 0.9856 0.9867 0.9848 0.9911 Optdigits 0.9982 0.0101 0.0088 0.0096 0.0123 0.0091 0.9953 0.9976 0.9988 0.9983 0.9993 Shuttle 0.9988 0.0012 0.0014 0.0021 0.0018 0.0010 0.6987 0.7348 0.7561 0.6908 0.7195 Vehicle 0.7315 0.0521 0.0454 0.0480 0.04321 0.0148 Mean rank 4.0938 4.8750 3.9375 1.7500 3.9375 2.4063\nweighted (LLW) decoding strategy was used for ECOCONE. Finally, a structure with 3 hidden layers was adopted for DeepECOC, which had 0.1 denoising rate and 0.1 dropout rate:.\nq D W W W softmax x x V b1 b2\nFor the fine-tuning process, we used the stochastic gradient descent algorithm. The learning rate and epoches from different data sets are described in Table[3] The autoencoder and denoising au- toencoder's architectures are as same as DeepECOC with ECOCONE initialized by one-versus-one. For single-layer ECOC approaches, we chose the best results shown in (Escalera et al.] 2010) as our compared results. For all DeepECOC models, we used support vector machines (SVMs) with RBF kernel function as base classifiers. The parameters of SVMs were set to default (Chang & Lin 2011).\nTable[2|shows the average classification accuracy and standard deviation on 16 UCI data sets. Excep1 on the OptDigits data set, DeepECOC achieved the best results compared with autoencoder, denois ing autoencoder and single-layer ECOC approaches. In fact, on the OptDigits data set, DeepECOC. achieved comparative result with single-layer ECOC approaches. Among others, DeepECOC with. ECOCONE (initialized by one-versus-one) coding strategy obtained the best results on 9 data sets. while DeepECOC with ECOCONE (initialized by DECOC) coding strategy obtained the best results. on 5 data sets. From the mean rank values, we can see that DeepECOC with ECOCONE (initialized by one-versus-one and DECOC) strategy far surpass other compared methods..\nTable 3: Details of the learning and epoch on the UCI data sets rate"}, {"section_index": "8", "section_name": "4.2 CLASSIFICATION ON THE USPS DATA SET", "section_text": "The USPS handwritten digits data set includes 7291 training samples and 2007 test samples from 10 classes. The size of the images is 16 16 = 256. Our experiments on this data set were divided into 2 parts. 
Firstly, we compared DeepECOC with two traditional feature learning models (principal components analysis (PCA) (Jolliffe] 2002) and marginal Fisher analysis (MFA) (Yan et al.2007)) autoencoder (AE), denoising autoencoder (DAE), LeNet (LeCun et al.]1998), PCANet (Chan et al. 2015) and single-layer ECOC approaches. Here, PCA is an unsupervised method, MFA is a super- vised method. For MFA, the number of nearest neighbors for constructing the intrinsic graph was set to 5, while that for constructing the penalty graph was set to 15. For DeepECOC, we also used 3 coding design methods in this experiment. We used batch gradient descent for the fine-tuning process, the batch size was set to 100, the learning rate was set to 1, the number of epoch was set to 40000, the denoising rate, and dropout rate were set to O.1. We also used SVMs with RBF kernel and default parameters as base classifiers. For single-layer ECOC approaches, we adopted ECOCONE (initialized by one-versus-one) as coding design method and linear loss-weighted (LLW) decoding strategy. For the LeNet model, we used 2 convolutional layers, two pooling layers and two fully connected layers. The kernel size of the convolutional layers and pooling layers was set to 2 2, the stride was set to 1, the number of nodes of the first layer was set to 200, the epoch was set to 8000, the initial learning rate was set to O.001, learning rate policy was set to \"inv\", and the momentum was set to O.9. For the PCANet model, we used two PCA-filter stages, one binary hashing stage and one blockwise histograms. The filter size, the number of filters, and the block size were set to k1 = k2 = 3, L1 = L2 = 4, and 7 7, respectively. The experimental results are shown in Fig.2(a)\nFrom Fig.2(a). we can see that DeepECOC with ECOCONE (initialized by one-versus-one) coding strategy achieved the best result than other methods include traditional feature learning models existing deep learning methods and single-layer ECOC approaches..\nProblem n Epoch Problem n Epoch Dermatology 0.1 2000 Yeast 0.01 4000 Iris 0.1 400 Satimage 0.01 4000 Ecoli 0.1 2000 Letter 0.01 8000 Wine 0.1 2000 Pendigits 0.01 2000 Glass 0.01 4000 Segmentation 0.01 8000 Thyroid 0.1 800 Optdigits 0.01 2000 Vowel 0.1 4000 Shuttle 0.1 2000 Balance 0.1 4000 Vehicle 0.1 4000\nIn the second part, we evaluated DeepECOC with different number of hidden layers. We used 2 to 6 hidden layers in our experiments. The parameter settings were as same as the first part. Fig.2(b)\n0.98 1 LeNet PCANet 0.98 AE 0.97 0.96 DAE PCA 0.96 T 0.94 MFA ACennrev Ccnra Single DeepECOC(1) 0.95 0.92 DeepECOC(2) iaaeoon DeepECOC(3) CCaassasoon 0.9 0.94 0.88 laasi 0.93 0.86 0.92 0.84 0.91 0.82 0.8 0.9 Methods 2 3 4 5 6 Number of Hidden Layers (a) (b)\nFigure 2: (a) Classification accuracy obtained on the USPS data set. Here, DeepECOC(1)~ Deep ECOC(3) are 3 variant of DeepECOC with the ECOCONE coding design initialized by one-versus one, one-versus-all and DECOC respectively. (b) Classification accuracy with different numbers of hidden layers on the USPS data set.\nAE LeNet DAE PCANet DeepECOC(1) AE DeepECOC(2) DAE DeepECOC3) Sparse Single Dense 0.95 0.95 Single 0.9 0.9 0.85 0.85 Methods Methods (a) 784-Z1Z2-Z310 (b) 784 500 - 500 - 2000 - 10\nFigure 3: Classification accuracy obtained on the MNIST data set for two architectures"}, {"section_index": "9", "section_name": "4.3 CLASSIFICATION ON THE MNIST DATA SET", "section_text": "shows the experimental results. We can see that DeepECOC obtained the best result when using 3. 
hidden layers. When the number of hidden layers is less than 3, the effectiveness of DeepECOC. increases with the increasing of the number of hidden layers. Along with the number of hidden layers continues to grow, the effectiveness of DeepECOC decreases..\nMNIST handwritten digits data set has a training set of 60,o00 examples, and a test set of 10,000 examples with 784 dimensional features. We designed 2 architectures for autoencoder, denoising autoencoder and DeepECOC. The first architecture was 784 - Z1 Z2 - Z3 10, where Z; was the number of hidden neurons designed based on some ECOC coding strategies. We designed this architecture because we wanted to make autoencoder and denoising autoencoder had the same structure with DeepECOC. The second architecture is 784- 500- 500- 2000-10. This architecture was used in (Hinton & Salakhutdinov2006). In order to make DeepECOC adapt to this structure, we used the dense and sparse coding design methods that can control the codeword length. Note that, the dense and sparse coding design methods are totally random and data-independent. The denoising rate and dropout rate were set to O.1, the batch size was set to 100, the learning rate was set to O.01, and the number of epoch was set to 80o0o. For LeNet model, we adopted the parameters as same as (LeCun et al.]1998). For PCANet model, we used two PCA-filter stages, one binary hashing stage and one blockwise histograms. In the PCANet, the filter size, the number of filters, and the block size were set to k1 = k2 = 8, L1 = L2 = 7, and 7 7, respectively.\nig.3(a)|and Fig.3(b)[show the experimental results on 2 architectures. We can see that DeepECOC are comparative with existing deep learning methods on the second architecture and outperform\nTable 4: Classification accuracy obtained on the LBP-CIFAR10 data set. The best result for each scenario is highlighted in bold face..\nthem on the first architecture. In addition, DeepECOC with both two architectures outperform t single-layer ECOC approaches."}, {"section_index": "10", "section_name": "5 CONCLUSION", "section_text": "In this paper, we propose a novel deep learning model, called deep error correcting output codes. (DeepECOC). DeepECOC extend traditional ECOC algorithms to a deep architecture fashion, anc meanwhile, brings new elements to the deep learning area, such as supervised initialization, anc. automatic neglecting of part of the data during network training. Extensive experiments on 16 data sets from the UCI machine learning repository, the USPS and MNIST handwritten digits anc. the CIFAR-10 data set demonstrate the superiority of DeepECOC over traditional ECOC, feature. learning and deep learning methods. In future work, we will further exploit the learnability o. DeepECOC on large scale applications.\nThe CIFAR-10 dataset is a relative large scale data set which consists of 60000 32 32 colour images. in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. For the purpose of reducing computational cost, we attempted to extract features of the data using. an efficient local binary patterns algorithm. As a result, the representations with dimensionality 36. and 256 were adopted and the data were normalized to [0, 1] as well, called LBP-CIFAR10 (36. and LBP-CIFAR10 (256). We also used 3 hidden layers for all deep learning methods. The learning. rate was set to 0.1, and the epoch was set to 40o0. For the LeNet model, we used 2 convolutional. ayers and two fully connected layers without pooling layers. 
The kernel size was set to 2 2, the. stride was set to 1, the number of node of the first fully connected layer was set to 64, the epoch was set to 400o, the initial learning rate was set to O.01, learning rate policy was set to \"inv\", anc. the momentum was set to 0.9. For the PCANet model, we used two PCA-filter stages, one binary. hashing stage and one blockwise histograms. In the PCANet, the filter size, the number of filters.. and the block size were set to k1 = k2 = 3, L1 = L2 = 4, and 7 7, respectively. The classification. accuracy are reported in Table4\nFrom Table4] we can easy to see that DeepECOC achieved the best results. Moreover, DeepECOC with ECOCONE (initialized by one-versus-one) coding strategy achieved the better results than autoencoder and denoising autoencoder, LeNet and PCANet. Hence, we can conclude that, Deep ECOC are a general model to handle different real world applications and achieves desirable results in most cases.\nS. Escalera, O. Pujol, and P. Radeva. Ecoc-one: A novel coding and decoding strategy. In ICPR, volume 3, pp. 578-581, 2006. S. Escalera, O. Pujol, and P. Radeva. Separability of ternary codes for sparse designs of error- correcting output codes. Pattern Recognition Letters, 30(3):285-297, 2009. S. Escalera, O. Pujol, and P. Radeva. On the decoding process in ternary error-correcting output codes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(1):120-134, 2010. A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber. A Novel Con nectionist System for Unconstrained Handwriting Recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(5):855-868, 2009. T. Hastie, R. Tibshirani, et al. Classification by pairwise coupling. The annals of statistics, 26(2): 451-471, 1998. G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006. G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. S. Hong, T. You, S. Kwak, and B. Han. Online tracking by learning discriminative saliency map with convolutional neural network. In ICML, 2015. I. Jolliffe. Principal Component Analysis. New York: Springer-Verlag, 1986. I. Jolliffe. Principal component analysis. Wiley Online Library, 2002. A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pp. 1106-1114, 2012.\nI. Jolliffe. Principal comp onent analysis. Wiley Online Library, 2002\nN. J. Nilsson. Learning Machines. McGraw-Hill, 1965\nS. Escalera, O. Pujol, and P. Radeva. Ecoc-one: A novel coding and decoding strategy. In ICPR,. volume 3, pp. 578-581, 2006. S. Escalera, O. Pujol, and P. Radeva. Separability of ternary codes for sparse designs of error-. correcting output codes. Pattern Recognition Letters, 30(3):285-297, 2009. S. Escalera, O. Pujol, and P. Radeva. On the decoding process in ternary error-correcting output. codes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(1):120-134, 2010. A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, and J. Schmidhuber. A Novel Con- nectionist System for Unconstrained Handwriting Recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(5):855-868, 2009.\nN. J. Nilsson. Learning Machines. McGraw-Hill, 1965.. A. Passerini, M. Pontil, and P. Frasconi. 
New results on error correcting output codes of kernel machines. Neural Networks, IEEE Transactions on, 15(1):45-54, 2004.. O. Pujol, P. Radeva, and J. Vitria. Discriminant ECOC: a heuristic method for application depen. dent design of error correcting output codes. Pattern Analysis and Machine Intelligence, IEEE. Transactions on, 28(6):1007-1012, 2006. O. Pujol, S. Escalera, and P. Radeva. An incremental node embedding technique for error correcting output codes. Pattern Recognition, 41(2):713-725, 2008. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating. errors. Cognitive modeling, 5:3, 1988. H. Sak, A. Senior, and F. Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In INTERSPEECH, pp. 338-342, 2014. K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recog. nition. CoRR, abs/1409.1556, 2014. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Ra-. binovich. Going Deeper with Convolutions. CoRR, abs/1409.4842, 2014.. G. Trigeorgis, K. Bousmalis, S. Zafeiriou, and B. Schuller. A deep semi-nmf model for learning. hidden representations. In ICML, pp. 1692-1700, 2014. P. Vincent, H. Larochelle, Y. Bengio, and P.-A.Manzagol. Extracting and composing robust features. with denoising autoencoders. In ICML, pp. 1096-1103, 2008.\n4418-4427, 2015. S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin. Graph embedding and extensions. a general framework for dimensionality reduction. Pattern Analysis and Machine Intelligence,. IEEE Transactions on, 29(1):40-51, 2007. Y. Yuan, L. Mou, and X. Lu. Scene Recognition by Manifold Regularized Deep Learning Architec. ture. Neural Networks and Learning Systems, IEEE Transactions on, 26(10):2222-2233, 2015. X. Zhang, J. Zhao, and Y. LeCun. Character-level convolutional networks for text classification. In NIPS, pp. 649-657, 2015. Y. Zheng, G. Zhong, J. Liu, X. Cai, and J. Dong. Visual texture perception with feature learning. models and deep architectures. In Pattern Recognition, pp. 401-410. Springer, 2014. Y. Zheng, Y. Cai, G. Zhong, Y. Chherawala, Y. Shi, and J. Dong. Stretching deep architectures for. text recognition. In ICDAR, pp. 236-240, 2015. G. Zhong and M. Cheriet. Adaptive error-correcting output codes. In IJCAI, 2013. G. Zhong and C.-L. Liu. Error-correcting output codes based ensemble feature extraction. Pattern. Recognition, 46(4):1091-1100, 2013. G. Zhong, K. Huang, and C.-L. Liu. Joint learning of error-correcting output codes and dichotomiz- ers from data. Neural Computing and Applications, 21(4):715-724, 2012.\nG. Zhong and C.-L. Liu. Error-correcting output codes based ensemble feature extraction. Pattern Recognition, 46(4):1091-1100, 2013. G. Zhong, K. Huang, and C.-L. Liu. Joint learning of error-correcting output codes and dichotomiz- ers from data. Neural Computing and Applications, 21(4):715-724, 2012."}] |
SkpSlKIel | [{"section_index": "0", "section_name": "WHY DEEP NEURAL NETWORKS FOR FUNCTION AP PROXIMATION?", "section_text": "Shivu Liang & R. Srikant\nsliang26,rsrikant}@illinois.edu"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Neural networks have drawn significant interest from the machine learning community, especiall due to their recent empirical successes (see the surveys (Bengio, 20o9)). Neural networks are use to build state-of-art systems in various applications such as image recognition, speech recognitior natural language process and others (see, Krizhevsky et al. 2012;Goodfellow et al 2013; Wa et al 2013, for example). The result that neural networks are universal approximators is one of the theoretical results most frequently cited to justify the use of neural networks in these applications Numerous results have shown the universal approximation property of neural networks in approxi mations of different function classes, (see, e.g., Cybenko1989; Hornik et al1989; Funahashi989 Hornik1991; Chui & L11992; Barron 1993; Poggio et al.2015)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to ap- proximate a function is exponentially larger than the corresponding number of neurons needed by a deep network for a given degree of function approximation. First, we consider univariate functions on a bounded interval and require a neural network to achieve an approximation error of e uniformly over the interval. We show that shallow networks (i.e., networks whose depth does not depend on e) require (poly(1/e)) neurons while deep networks (i.e., networks whose depth grows with 1/e) require O(polylog(1/e)) neurons. We then extend these results to certain classes of important multivariate functions. Our results are derived for neural networks which use a combination of rectifier linear units (ReLUs) and bi- nary step units, two of the most popular type of activation functions. Our analysis builds on a simple observation: the multiplication of two bits can be represented by a ReLU.\nAll these results and many others provide upper bounds on the network size and assert that small. approximation error can be achieved if the network size is sufficiently large. More recently, there has been much interest in understanding the approximation capabilities of deep versus shallow networks.. Delalleau & Bengio (011) have shown that there exist deep sum-product networks which cannot be approximated by shallow sum-product networks unless they use an exponentially larger amount of units or neurons.Montufar et al. (014) have shown that the number of linear region increases exponentially with the number of layers in the neural network. Telgarsky (016) has established such a result for neural networks, which is the subject of this paper. Eldan & Shamir (015) have. shown that, to approximate a specific function, a two-layer network requires an exponential number. of neurons in the input dimension, while a three-layer network requires a polynomial number of. neurons. These recent papers demonstrate the power of deep networks by showing that depth can lead to an exponential reduction in the number of neurons required, for specific functions or specific. neural networks. 
Our goal here is different: we are interested in function approximation specifically\nand would like to show that for a given upper bound on the approximation error, shallow network require exponentially more neurons than deep networks for a large class of functions\nThe outline of this paper is as follows. In Section , we present necessary definitions and the problem statement. In Section 3, we present upper bounds on network size, while the lower bound is provided in Section Conclusions are presented in Section Around the same time that our paper was uploaded in arxiv, a similar paper was also uploaded in arXiv by Yarotsky (2016). The results in the two papers are similar in spirit, but the details and the general approach are substantially different.\nIn this section, we present definitions on feedforward neural networks and formally present the problem statement."}, {"section_index": "3", "section_name": "2.1 FEEDFORWARD NEURAL NETWORKS", "section_text": "A feedforward neural network is composed of layers of computational units and defines a unique function f : Rd -> R. Let L denote the number of hidden layers, N, denote the number of units of 1 input of neural network, z' denote the output of the jth unit in layer l, w, denote the weight of the edge connecting unit i in layer l and unit j in layer l + 1, b, denote the bias of the unit j in layer l Then outputs between layers of the feedforward neural network can be characterized by following iterations:\nd input layer: j E [Ni]] C i=1 NL output layer: f(x) = i=1\nHere, o() denotes the activation function and [n] denotes the index set [n] = {1, ..., n}. In thi paper, we only consider two important types of activation functions:\nRectifier linear unit: o(x) = max{0, x}, x E R Binary step unit: o(x) = I{x > 0}, x E R.\nThe multilayer neural networks considered in this paper are allowed to use either rectifier linear units (ReLU) or binary step units (BSU), or any combination of the two. The main contributions of. this paper are\nWe have shown that, for c-approximation of functions with enough piecewise smoothness, a. multilayer neural network which uses O(log(1/e)) layers only needs O(poly log(1/e)) neurons. while N(poly(1/s)) neurons are required by neural networks with o(log(1/e)) layers. In other. words, shallow networks require exponentially more neurons than a deep network to achieve the. level of accuracy for function approximation.. We have shown that for all differentiable and strongly convex functions, multilayer neural net-. works need N(log(1/e)) neurons to achieve an e-approximation. Thus, our results for deep net-. works are tight.\nNi l+1 l E[L-1],jE[Ni+1] 0 i=1 input layer: z. j E [Ni]] i=1 NL output layer: f(x) = i=1\nNi l E[L-1],j E[Ni+1] i=1\nWe call the number of layers and the number of neurons in the network as the depth and the size. f the feedforward neural network, respectively. We use the set F(N, L) to denote the function. set containing all feedforward neural networks of depth L, size N and composed of a combination\nx - xo X1 x-xo 2 2i Xn n Xi i=0 2 1st layer. 2nd layer. nth layer : binary step unit : adder\nFigure 1: An n-layer neural network structure for finding the binary pansion of a number in I0. 1\nof rectifier linear units (ReLUs) and binary step units. We say one feedforward neural network is deeper than the other network if and only if it has a larger depth. 
Through this paper, the terms feedforward neural network and multilayer neural network are used interchangeably."}, {"section_index": "4", "section_name": "2.2 PROBLEM STATEMENT", "section_text": "min If-fllE. fEF(N,L)\nSpecifically, we aim to answer the following questions.\nThe first question asks what depth and size are sufficient to guarantee an e-approximation. The. second question asks, for a fixed depth, what is the minimum size of a neural network require. to guarantee an c-approximation. Obviously, tight bounds in the answers to these two questions. provide tight bounds on the network size and depth required for function approximation. Besides. solutions to these two questions together can be further used to answer the following question. If deeper neural network of size Na and a shallower neural network of size N, are used to approximate. the same function with the same error, then how fast does the ratio Na/Ns decay to zero as the erro. decays to zero?\nIn this section, we present upper bounds on the size of the multilayer neural network which are sufficient for function approximation. Before stating the results, some notations and terminology deserve further explanation. First, the upper bound on the network size represents the number oi neurons required at most for approximating a given function with a certain error. Secondly, th notion of the approximation is the Loo distance: for two functions f and g, the Lo distance betweer these two function is the maximum point-wise disagreement over the cube [0. 1ld"}, {"section_index": "5", "section_name": "3.1 APPROXIMATION OF UNIVARIATE FUNCTIONS", "section_text": "In this subsection, we present all results on approximating univariate functions. We first present a theorem on the size of the network for approximating a simple quadratic function. As part of the proof, we present the structure of the multilayer feedforward neural network used and show how the neural network parameters are chosen. Results on approximating general functions can be found ir Theorem D and .\nIn this paper, we focus on bounds on the size of the feedforward neural network function approx- imation. Given a function f, our goal is to understand whether a multilayer neural network f of depth L and size N exists such that it solves\n1 Does there exists L(e) and N(e) such that () is satisfied? We will refer to such L(e) and. N(e) as upper bounds on the depth and size of the required neural network.. 2 Given a fixed depth L, what is the minimum value of N such that () is satisfied? We will. refer to such an N as a lower bound on the size of a neural network of a given depth L\nProof. The proof is composed of three parts. For any x E [0, 1], we first use the multilayer neura\nall i > 0. It is straightforward to see that the n-layer neural network shown in Figure can be used to find xo, ..., xn.\n2 n n 12 n n Xi 1 8/2 f(x) ) N |2 Xi max 0,2(xi1) 2i 0 i=0\nFinally, we consider the approximation error of this multilayer neural network.\n2 n n 8 Xi 1 Xi Xi fx)-f(x)= x2 <2|x = 2 2i 2i 2i 2n-1 i=0 i=0 i=n+1\nNext, a theorem on the size of the network for approximating general polynomials is given as fol. lows.\nTheorem 2. For polynomials f(x) = =o a;x', x E [0, 1] and =1 |ai| 1, there exists a mul tilayer neural network f(x) with O (p + log ) layers, O (log ) binary step units and O (plog P rectifier linear units such that |f(x) - f(x)| < e, Vx E [0, 1].\nProof. The proof is composed of three parts. 
We first use the deep structure shown in Figure to find the n-bit binary expansion t=o ayx' of x. Then we construct a multilayer network to approximate. polynomials g(x) = x', i = 1, ..., p. Finally, we analyze the approximation error..\nn n 1 n n n 52 812 1 Jm+1 max 0 x 9m 2j 9m )J - 0 0 j=0 i= 0\nn X j X j p fx)-fx|= |ail: aigi a;x gi 2j 2j -1 0 0 i=0 i=0\nThe third equality follows from the fact that x; E {0, 1} for all i. Therefore, the function f(x) can. be implemented by a multilayer network containing a deep structure shown in Figure and another hidden layer with n rectifier linear units. This multilayer neural network has O(n) layers, O(n). binary step units and O(n) rectifier linear units.\nClearly, the equation () defines iterations between the outputs of neighbor layers. Therefore, the. deep neural network shown in Figure can be used to implement the iteration given by (). Further. to implement this network. one should use O(p) layers with O(pn) rectifier linear units in total. We\nxo CO CO C1 X1 C1 2 X2 C2 X2 ReL ReLt ReLI ReLl Xn ReLU ReLU 91 2 92 93 2 =0 =0\nX1 X1 2 2 L ReLU ReL ReLU Xn C C 91 12 93 ) 9p ) 9p =0 i=0\nFigure 2: The implementation of polynomial function\nThis indicates, to achieve c-approximation error, one should choose n =log ? + 1. Besides. since we used O(n + p) layers with O(n) binary step units and O(pn) rectifier linear units in total, this multilayer neural network thus has O (p + log ) layers, O (log ?) binary step units and. O (p log ?) rectifier linear units.\nIn Theorem , we have shown an upper bound on the size of multilayer neural network for approxi mating polynomials. We can easily observe that the number of neurons in network grows as p log p with respect to p, the degree of the polynomial. We note that both Andoni et al (2014) and Barron (993) showed the sizes of the networks grow exponentially with respect to p if only 3-layer neural networks are allowed to be used in approximating polynomials.\nBesides, every function f with p + 1 continuous derivatives on a bounded set can be approximatec easily with a polynomial with degree p. This is shown by the following well known result of La grangian interpolation. By this result, we could further generalize Theorem . The proof can be found in the reference (Gil et a, 2007)\n1 Rnll=llf-Pn< 2n(n+1\nwhere f(n)(x) is the derivative of f of the nth order and the norm llfll is the loo norm |f|| = maxxE[-1,1] f(x).\nTheorem 4. Assume that function f is continuous on [0, 1] and log ? + 1 times differentiable in (0, 1). Let f(n) denote the derivative of f of nth order and |/f|| = maxxe[0,1] f(x). If |f(n)|| n! holds for all n E log ?+ 1|, then there exists a deep neural network f with O (log ) layers O (log ) binary step units, O ((log ) ) rectifier linear units such that\nProof. Let N = [log ?]. From Lemma , it follows that there exists polynomial Py of degree N such that for any x E [0, 1],\n+1 1 f(x)Pn(x) 2N(N+1)! 2N\nLemma 3 (Lagrangian interpolation at Chebyshev points). If a function f is defined at point. Zo, ..., Zn, Zi = cos((i + 1/2)/(n + 1)), i E [n], there exists a polynomial of degree not more than n such that Pn(zi) = f(zi), i = 0,...,n. This polynomial is given by Pn(x) = i=o f(zi)Li(x) T n+1 (x) -1, 1 and n + 1 times differentiable in (-1, 1), then 1 || Rn| = ||f - Pn |\nshow the implementation of this function. Let x = AV 3t. 
The error can now be upper bounded by\nf(x)f(x)=|f(x)-Pv(x)|f(x)-fx)+|fx)-Pv(x) 1 1 X i 2i 2N / 2N i=0\nIn the following, we describe the implementation of f by a multilayer neural network. Since Py is a polynomial of degree N, function f can be rewritten as.\nCorollary 5 (Function addition). Suppose that all functions h1,...,hs satisfy the conditions in Theorem and the vector E {w E Rk : l|wll, = 1}, then for the linear combination. f = ;=1 Bihi, there exists a deep neural network f with O (log ) layers, O (log ) binary. step units, O ((log 1 rectifier linear units such that [f(x) f| c, Vx E [0, 1].\nRemark: Clearly, Corollary follows directly from the fact that the linear combination f satisfie the conditions in Theorem if all the functions h1,...,hk satisfy those conditions. We note here tha the upper bound on the network size for approximating linear combinations is independent of k, the number of component functions.\nRemark: Proofs of Corollary 6 and can be found in the appendix. We observe that different from the case of linear combinations, the upper bound on the network size grows as k2 log? k in the case of function multiplications and grows as k2 (log ) in the case of function compositions where k is the number of component functions.\nN N N X i X i f(x) = Pv CnJn 2i 2i i=0 n=0 i=0\nfor some coefficients co, ..., C and gn = x\", n E [N]. Hence, the multilayer neural network shown. in the Figure can be used to implement f(x). Notice that the network uses O(N) layers with O(N) binary step units in total to decode xo,..,xy and O(N) layers with O(N2) rectifier linear units in. total to construct the polynomial Py. Substituting N = log ? , we have proved the theorem..\nRemark: Note that, to implement the architecture in Figure using the definition of a feedforward neural network in Section , we need the gi, i E [p] at the output. This can be accomplished by using O(p2) additional ReLUs. Since p = O(log(1/e)), this doesn't change the order result in Theorem .\nTheorem shows that any function f with enough smoothness can be approximated by a multilayer. neural network containing polylog() neurons with e error. Further, Theorem can be used to. show that for functions h1,.,hk with enough smoothness, then linear combinations, multiplications. and compositions of these functions can as well be approximated by multilayer neural networks. containing polylog() neurons with e error. Specific results are given in the following corollaries.\nIn this subsection, we have shown a polylog() upper bound on the network size for c-. approximation of both univariate polynomials and general univariate functions with enough smooth-. ness. Besides, we have shown that linear combinations, multiplications and compositions of uni variate functions with enough smoothness can as well be approximated with e error by a multilayer neural network of size polylog (). In the next subsection, we will show the upper bound on the. network size for approximating multivariate functions.."}, {"section_index": "6", "section_name": "3.2 APPROXIMATION OF MULTIVARIATE FUNCTIONS", "section_text": "In this subsection, we present all results on approximating multivariate functions. We first present. a theorem on the upper bound on the neural network size for approximating a product of multi- variate linear functions. We next present a theorem on the upper bound on the neural network size for approximating general multivariate polynomial functions. Finally, similar to the results in the. 
univariate case, we present the upper bound on the neural network size for approximating the linear. combination, the multiplication and the composition of multivariate functions with enough smooth. ness.\nRemark: The proof is given in the appendix. By further analyzing the results on the network size we obtain the following results: (a) fixing degree p, N(d, s) = O (dp+1 log ) as d -> oo and (b) fixing input dimension d, N(p, s) = O (pd log ) as p -> oo. Similar results on approximating multivariate polynomials were obtained by Andoni et al (2014) and Barron (1993). Barron (1993) showed that on can use a 3-layer neural network to approximate any multivariate polynomial with degree p, dimension d and network size dP /E2. Andoni et al (014) showed that one could use the gradient descent to train a 3-layer neural network of size d2p /e2 to approximate any multivariate polynomial. However, Theorem shows that the deep neural network could reduce the network size from O (1/e) to O (log ) for the same e error. Besides, for a fixed input dimension d, the size of the 3-layer neural network used by Andoni et a (2014) and Barron (1993) grows exponentially with respect to the degree p. However, the size of the deep neural network shown in Theorem grows only polynomially with respect to the degree. Therefore, the deep neural network could reduce the network size from O(exp(p)) to O(poly(p)) when the degree p becomes large.\nTheorem shows an upper bound on the network size for approximating multivariate polynomials Further, by combining Theorem and Corollary , we could obtain an upper bound on the network size for approximating more general functions. The results are shown in the following corollary\nTheorem 8 shows an upper bound on the network size for e-approximation of a product of multi. variate linear functions. Furthermore, since any general multivariate polynomial can be viewed as a linear combination of products, the result on general multivariate polynomials directly follows from. Theorem 8.\np+d-1 od N(d,p,s) = p log d - 1 E\n+d- 00 N(k,p,d, s) = O log\nRemark: Corollary 0 shows an upper bound on network size for approximating compositions of. multivariate polynomials and general univariate functions. The upper bound can be loose due to the assumption that l(x) is a general multivariate polynomials of degree p. For some specific cases, the upper bound can be much smaller. We present two specific examples in the Appendix and\nIn this subsection, we have shown that a similar polylog () upper bound on the network size for e-approximation of general multivariate polynomials and functions which are compositions of uni. variate functions and multivariate polynomials.\nThe results in this section can be used to find a multilayer neural network of size polylog() which. provides an approximation error of at most e. In the next section, we will present lower bounds on the. network size for approximating both univariate and multivariate functions. The lower bound together. with the upper bound shows a tight bound on the network size required for function approximations\nWhile we have presented results in both the univariate and multivariate cases for smooth functions, the results automatically extend to functions that are piecewise smooth, with a finite number of pieces. 
In other words, if the domain of the function is partitioned into regions, and the function is sufficiently smooth (in the sense described in the Theorems and Corollaries earlier) in each of the regions, then the results essentially remain unchanged except for an additional factor which will depend on the number of regions in the domain."}, {"section_index": "7", "section_name": "LOWER BOUNDS ON FUNCTION APPROXIMATIONS", "section_text": "In this section, we present lower bounds on the network size in function for certain classes of func. tions. Next, by combining the lower bounds and the upper bounds shown in the previous section, we could analytically show the advantages of deeper neural networks over shallower ones. The theorem. below is inspired by a similar result (DasGupta & Schnitger, 993) for univariate quadratic func-. tions, where it is stated without a proof. Here we show that the result extends to general multivariate. strongly convex functions.\nTheorem 11. Assume function f : [0, 1|d -> R is differentiable and strongly convex with parameter . Assume the multilayer neural network f is composed of rectifier linear units and binary step units. If |f(x) - f(x)| e, Vx E [0, 1]d, then the network size N log2 (16e) .\nRemark: The proof is in the Appendix . Theorem shows that every strongly convex function cannot be approximated with error e by any multilayer neural network with rectifier linear units and binary step units and of size smaller than log2(/) - 4. Theorem together with Theorem di- rectly shows that to approximate quadratic function f(x) = x2 with error e, the network size should be of order O (log ). Further, by combining Theorem and Theorem , we could analytically show the benefits of deeper neural networks. The result is given in the following corollary.\nRemarks: (i) The strong convexity requirement can be relaxed: the result obviously holds if the function is strongly concave and it also holds if the function consists of pieces which are strongly convex or strongly concave. (ii) Corollary 2 shows that in the approximation of the same function. the size of the deep neural network Ns is only of polynomially logarithmic order of the size oi the shallow neural network Nd, i.e., Na = O(polylog(Ns)). Similar results can be obtained for multivariate functions on the type considered in Section 3.2"}, {"section_index": "8", "section_name": "5 CONCLUSIONS", "section_text": "In this paper, we have shown that an exponentially large number of neurons are needed for functior approximation using shallow networks, when compared to deep networks. The results are estab lished for a large class of smooth univariate and multivariate functions. Our results are establishec for the case of feedforward neural networks with ReLUs and binary step units."}, {"section_index": "9", "section_name": "ACKNOWLEDGMENTS", "section_text": "Y. Bengio. Learning deep architectures for ai. Foundations and trends in Machine Learning, 2009\nA. Gil, J. Segura, and N. M. Temme. Numerical methods for special functions. SIAM, 2007\nK. Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks. 1991\nM. Telgarsky. Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485, 2016\nTransactions on Information theory, 1993 Y. Bengio. Learning deep architectures for ai. Foundations and trends in Machine Learning, 2009.. C. K. Chui and X. Li. Approximation by ridge functions and neural networks with one hidden layer.. Journal of Approximation Theory, 1992. G. Cybenko. 
Approximation by superpositions of a sigmoidal function. Mathematics of control,. signals and systems, 1989. B. DasGupta and G. Schnitger. The power of approximating: a comparison of activation functions.. In NIPS, 1993. O. Delalleau and Y. Bengio. Shallow vs. deep sum-product networks. In NIPs, 2011.. R. Eldan and O. Shamir. The power of depth for feedforward neural networks. arXiv preprint. arXiv:1512.03965, 2015. K. I. Funahashi. On the approximate realization of continuous mappings by neural networks. Neural networks, 1989.\nL. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using dropconnect. In ICML, 2013. D. Yarotsky. Error bounds for approximations with deep ReLU networks. arXiv preprini arXiv:1610.01145, 2016"}, {"section_index": "10", "section_name": "APPENDIX A PROOF OF COROLLARY 5", "section_text": "k k fx)-fx)|= Bihi(x) [i]|hi(x)-h(x)]=E i=1 i=1\nN N X i hj(x) = Cij Jj 2i j=0 i=0\nk N N X i f(x) = Cij 9j 2i i=1 j=0 i=0\nN k N N N f(x)= X i 4 Xi C;9j : 9j 2i . 2i j=0 i=1 i=0 j=0 i=0\nProof. Since f(x) = h1(x)h2(x)...h(x), then the derivative of f of order n is\nThen from Theorem 4, it follows that there exists a polynomial of Py degree N tha\nN+1 (N+k)N+k N (N +k 1)k-1(N+1)N+1\nk f(x)=Bihi(x). i=1\nn! Q1+..+ak=n Q10,...,Qk0\nn! ) n! ...Qk k - Q1!Q2 a1+...+ak=n a10,...,Qk0\n(N+1) 1 N +k Rn=f-PN (N +1)!2N 2V k - 1\nthen the error has an upper bound of\n2 N 2k log2 N and N > 4k + 2 log2\n4k log2 4k 4k log2 4k l(4k log2 4k) 2k log, 4k + log2 log2 4k log2 4k + log2 4k\n2 N > 4k log2 4k + 4k + 2 log2\nE llf -Pn I2\nN X i f(x) = Pv 2i i=0\nFurther, function f can be implemented by a multilayer neural network shown in Figure 2 and thi. network has at most O(N) layers, O(N) binary step units and O(N2) rectifier linear units.\nand T3(m - 1) (log. rectifier linear units such that -\nFm-1(x)-Fm-1(x)|< for x E [0, 1] 3\nRN| 2N\nwI2 l|Rv|\n2 N > k log2 N + 2k + log2\nN N N Xi X i X i f(x)f(x)|[f(x) + PN 2 2 2i i=0 i=0 i=0 N 22 <llf(1)ll x < I 2 E i=0\nF(x)-Fx)<E for x E [0, 1]\nFurther we assume the derivative of Fm-1 has an upper bound Fm-1 1. Then for Fm, sinc (x) can be rewritten as\nFm(x) = Fm-1(hm(x))\n[EmllIFm-1: [|hmll =\n3m 3m 3 T1(m) log3 =T1(m 1) log3 10g3 IW 3 E E 3m 3m T2(m) log3 =T2(m 1) log3 + T2(1) log3 E 2 m 3m \\ 2 + 13(1 O ns () and (), we could have for 2 < m < k\n1 1(k) =0 log T2(k) = O k log\nhm(x) -hm(x)| for x E [0, 1]] I3\nFmFm=F Fn Y Fm-1 3 E E E 3 3 3\n3m 3m 3 T log: (4) m (5) log 3m (6) log From iterations () and (), we could have for 2 m k, 1+ log3(1/e) 1 + log3(1/e) Ti(m) = Ti(m-1) + Ti(1) < Ti(m- 1) + Ti(1) m + log3(1/e) m 1+ log3(1/s) 1 + log3(1/e) T2(m) = T2(m -1) + T2(1) T2(m - 1) + T2(1) m + log3(1/s) m and thus. From the iteration (), we have for 2 m k,. 1+ log3(1/s) (1+ log3(1/e))3 T3(m) = T3(m- 1) + T3(1) T3(m - 1 m + log3(1/e) m2 and thus. Therefore, to approximate f = Fk, we need at most O (klog k log + log k (log ). layers O(k log k log + log k (log )?) binary step units and O (k2 (log )? + (log )) rectifier lin- ear units..\n1+ log3(1/e) (1 + log3(1/e))3 T3(m)=T3(m-1)+T31 T3(m-1) m + log3(1/e) m2\nT3(k) = O 10g"}, {"section_index": "11", "section_name": "APPENDIX D PROOF OF THEOREM 8", "section_text": "Proof. The proof is composed of two parts. 
As before, we first use the deep structure shown in Figure 1 to find the binary expansion of x and next use a multilayer neural network to approximate the polynomial.\nf(x) = f(x) = i=1 \\k= We further define gi(@) = II Wi i=1 \\k=1 Since for l = 1, ..., p - 1, g1(@) =II( I|wi||1 =1, i=1 \\k=1 i=1 then we can rewrite gi+1(x), l = 1, ..., p -- 1 into l+1 d gl+1(x) = W(l+1)k WikX W(l+1) k max 2(x(k)\np d f(x) =f(x) = Wik i=1 k=1\nd g() =II Wik i=1 k=1\n1 d 1 gi(x) =II H|wil =1, Wik i=1 k=1 i=1\nIn the rest of proof, we consider the approximation error. Since for k = .. d and Vx E [0. 11o\ndf(x) p p p 11 (wTx) |Wjk|P, Wjk dx(k) j=1 i=1,iFj j=1\nPd fx)-fx)=fx)fx)Vfx-x 2n\nf(x)-f(x)<E\nLet x: r(1) ) and w = (wi1, ..., wid). We could now use the deep structure shown in. x(k) the n-bit binary expansions of all x(k), k E [d], we need a multilayer neural network with n layers and dn binary units in total. Besides, we let x = ((1), ..., x(d). Now we define.\nl+1 Y 1 Jl+1(x) W(l+1)k 1+1 =1 W(l+1)k max (7 k=\nObviously, equation () defines a relationship between the outputs of neighbor layers and thus can. be used to implement the multilayer neural network. In this implementation, we need dn rectifier linear units in each layer and thus dnp rectifier linear units. Therefore, to implement function f(x). we need p + n layers, dn binary step units and dnp rectifier linear units in total..\napproximation, we thus use O d log Pd binary step units and O pd log pd rectifier linear units"}, {"section_index": "12", "section_name": "APPENDIX E PROOF OF THEOREM 9", "section_text": "ga(x)-ga(x)[ E\nSince the total number of multinomial is upper bounded by\np+d-1 p d -1\np+d-1 2 p log d -1 E\np+d- pd 2 p log d - 1 p->o C\np+d-1 na log d -> 0o. d -1\nProof. We first prove the univariate case d = 1. The proof is composed of two parts. We say the function g(x) has a break point at x = z if g is discontinuous at z or its derivative g' is discontinuous at z. We first present the lower bound on the number of break points M (e) that the multilayer neural network f should have for e-approximation of function f with error e. We next relate the number of break points M(e) to the network depth L and the size N.\n0 xo < x1 < x2 < x3 1\nWe now prove that if multilayer neural network f has no break point in [x1, x2], then f should have a break point in xo, x1 and a break point in x2, x3]. We prove this by contradiction. We assume the neural network f has no break points in the interval [xo, x3]. Since f is constructed by rectifier linear units and binary step units and has no break points in the interval [xo, x3], then f should be a linear function in the interval [xo, x3], i.e., f(x) = ax + b, x E [xo, x3] for some a and b. By assumption, since f approximates f with error at most e everywhere in [0, 1], then\nf(x1)ax1bE and f(x2)ax2-b E\nf(x2)-f(x1)-2a f(x2)-f(x1)+2a X2 X1 X2 X1\nf(x) = L Caga(x), a|a|p\nfx)-fx L [Ca: ga(x) ga(x) = E. a:|a|p\na < f'(x2)\nfx3)-fx3)=fx3)-ax3-b f(x3)-f(x2)-a(x3-x2)+f(x2)-ax-b f'(x2)(x3 x2) X3 - X =(f'(x2)-a)(x3 x2 > (2p1)E >\n1 M(e) 30 Vp> 1 4\n1 (N/L)L> p> 1 4 00\ng(y) = f(y,x and g(y) = f(u.x)\n1 2 L N log2 N > and 16E 16E\nRemark: We make the following remarks about the lower bound in the theorem\nfx2)-fx1) x2 -x1) f'(x2) X2 X1\n2pE 2E x2-x1= /puE = X2 - X1 X2 X1\nThe first inequality follows from strong convexity of f and f(x2) ax2 - b > E.The second inequality follows from the inequality (). Therefore, this leads to the contradiction. Thus there exists a break point in the interval [x2, x3]. 
Similarly, we could prove there exists a break point in the interval xo, x1. These indicate that to achieve e-approximation in [0, 1, the multilayer neural\nFurther, Telgarsky (016) has shown that the maximum number of break points that a multilayer neural network of depth L and size N could have is (N/L)L. Thus, L and NV should satisfy.\n1 2 L N > L 16E\nm N log2 log2 2 log2 m 16E 16E\n1) if the depth L is fixed, as in shallow networks, the number of neurons required is 2) if we are allowed to choose L optimally to minimize the lower bound, we will choose upper bound shown in Theorem ."}, {"section_index": "13", "section_name": "APPENDIX G PROOF OF COROLLARY 12", "section_text": "E 2 2\nProof. From Theorem , it follows that there exists a deep neural network fd of deptl d = O (log : ) and size\n2 1 Ng< c log\n1 2L s Ns > Lg 16E\n1 log Ns > log Ls + log 2L s 16E\nNa = O(L? log2 Ns)\nBy definition, a shallow neural network has a small number of layers, i.e., Ls. Thus, the size of the deep neural network is O(log- Ns). This means N < Ns..\nProof. It follows from the Theorem 4 that there exists d multilayer neural networks g1(x(1)), .., ga(x(d)) with O (log d) layers and O (dlog ) binary step units and O (dlog ) rec-. ifier linear units in total such that.\n+...+x(d)2 g1(x(1)) +... + ga(x(d) E < I2 2 2\ne-dx_f(x)| E Vx E [0,1]. 2\ng1(x()) +...+ ga(x f(x) = f 2\nBy inequalities (3) and (14), the the approximation error is upper bounded by\ng(t)-g(t)]<E, Vt E [0,1]]\nf(x) = g(t)\nfx)-fx)=gt)-g(t)<\nNow we have proved the corollary\n-=1gi(x -i=1gi(x\nNow the deep neural network has O(log?) layers, O(dlog?) binary step units and O (dlog d+ (1og 1) rectifier linear units."}] |
r1Ue8Hcxg | [{"section_index": "0", "section_name": "NEURAL ARCHITECTURE SEARCH WITI REINFORCEMENT LEARNING", "section_text": "Barret Zoph* Quoc V. Le\nGoogle Brain\nbarretzoph,qvl}@google.com"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The last few years have seen much success of deep neural networks in many challenging appli cations, such as speech recognition (Hinton et al.]2012), image recognition (LeCun et al.]1998 Krizhevsky et al.][2012) and machine translation (Sutskever et al.2014) Bahdanau et al.|2015|Wi et al.[2016). Along with this success is a paradigm shift from feature designing to architecture designing, i.e., from SIFT (Lowe||1999), and HOG (Dalal & Triggs|2005), to AlexNet (Krizhevsky et al.2012), VGGNet (Simonyan & Zisserman2014), GoogleNet (Szegedy et al.]2015), anc ResNet (He et al.|2016a). Although it has become easier, designing architectures still requires lot of expert knowledge and takes ample time.\nSample architecture A with probability p Trains a child network The controller (RNN) with architecture A to get accuracy R Compute gradient of p and scale it by R to update the controller\nFigure 1: An overview of Neural Architecture Search\nThis paper presents Neural Architecture Search, a gradient-based method for finding good architec tures (see Figure 1) . Our work is based on the observation that the structure and connectivity of a\nNeural networks are powerful and flexible models that work well for many diffi cult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a re- current network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that out- performs the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplex- ity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.\nneural network can be typically specified by a variable-length string. It is therefore possible to ust a recurrent network - the controller - to generate such string. Training the network specified by the string - the \"child network' on the real data will result in an accuracy on a validation set. Using this accuracy as the reward signal, we can compute the policy gradient to update the controller. As a result, in the next iteration, the controller will give higher probabilities to architectures that receive high accuracies. In other words, the controller will learn to improve its search over time.\nOur experiments show that Neural Architecture Search can design good models from scratch, ai achievement considered not possible with other methods. 
On image recognition with CIFAR-10 Neural Architecture Search can find a novel ConvNet model that is better than most human-invente architectures. Our CIFAR-10 model achieves a 3.65 test set error, while being 1.05x faster than the current best model. On language modeling with Penn Treebank, Neural Architecture Search car design a novel recurrent cell that is also better than previous RNN and LSTM architectures. The cel that our model found achieves a test set perplexity of 62.4 on the Penn Treebank dataset, which is 3.6 perplexity better than the previous state-of-the-art.\nHyperparameter optimization is an important research topic in machine learning, and is widely usec. in practice (Bergstra et al.2011} Bergstra & Bengio 2012]Snoek et al.2012}2015]Saxena & Verbeek|[2016). Despite their success, these methods are still limited in that they only search model from a fixed-length space. In other words, it is difficult to ask them to generate a variable-length. configuration that specifies the structure and connectivity of a network. In practice, these methods often work better if they are supplied with a good initial model (Bergstra & Bengio2012). Snoek et al.[2012 2015). There are Bayesian optimization methods that allow to search non fixed length. architectures (Bergstra et al.]2013] Mendoza et al.]2016), but they are less general and less flexible than the method proposed in this paper..\nModern neuro-evolution algorithms, e.g.,Wierstra et al.(2005);Floreano et al.(2008); Stanley et al. (2009), on the other hand, are much more flexible for composing novel models, yet they are usually. less practical at a large scale. Their limitations lie in the fact that they are search-based methods,. thus they are slow or require many heuristics to work well..\nNeural Architecture Search has some parallels to program synthesis and inductive programming, the idea of searching a program from examples (Summers[1977] Biermann][1978). In machine learning. probabilistic program induction has been used successfully in many settings, such as learning to solve simple Q&A (Liang et al.]2010] Neelakantan et al.] 2015] [Andreas et al.]2016), sort a list of numbers (Reed & de Freitas!2015), and learning with very few examples (Lake et al. 2015).\nThe controller in Neural Architecture Search is auto-regressive, which means it predicts hyperpa rameters one a time, conditioned on previous predictions. This idea is borrowed from the decode. in end-to-end sequence to sequence learning (Sutskever et al.|2014). Unlike sequence to sequence learning, our method optimizes a non-differentiable metric, which is the accuracy of the child net work. It is therefore similar to the work on BLEU optimization in Neural Machine Translation (Ran Zato et al. 2015 , Shen et al.| 2016). Unlike these approaches, our method learns directly from the reward signal without any supervised bootstrapping\nAlso related to our work is the idea of learning to learn or meta-learning (Thrun & Pratt2012), a general framework of using information learned in one task to improve a future task. More closely related is the idea of using a neural network to learn the gradient descent updates for another net work (Andrychowicz et al.]2016) and the idea of using reinforcement learning to find update policies for another network (Li & Malik|. 2016)."}, {"section_index": "2", "section_name": "3 METHODS", "section_text": "In the following section, we will first describe a simple method of using a recurrent network tc. generate convolutional architectures. 
We will show how the recurrent network can be trained witl. a policy gradient method to maximize the expected accuracy of the sampled architectures. We wil. present several improvements of our core approach such as forming skip connections to increase. model complexity and using a parameter server approach to speed up training. In the last part o\nthe section, we will focus on generating recurrent architectures, which is another key contributior of our paper.\n3.1 GENERATE MODEL DESCRIPTIONS WITH A CONTROLLER RECURRENT NEURAL NETWORK\nIn Neural Architecture Search, we use a controller to generate architectural hyperparameters of neural networks. To be flexible, the controller is implemented as a recurrent neural network. Let's suppose we would like to predict feedforward neural networks with only convolutional layers, we. can use the controller to generate their hyperparameters as a sequence of tokens:.\nFigure 2: How our controller recurrent neural network samples a simple convolutional network. I predicts filter height, filter width, stride height, stride width, and number of filters for one layer anc. repeats. Every prediction is carried out by a softmax classifier and then fed into the next time step. as input.\nIn our experiments, the process of generating an architecture stops if the number of layers exceeds a certain value. This value follows a schedule where we increase it as training progresses. Once the controller RNN finishes generating an architecture, a neural network with this architecture is built and trained. At convergence, the accuracy of the network on a held-out validation set is recorded. The parameters of the controller RNN, 0c, are then optimized in order to maximize the expected validation accuracy of the proposed architectures. In the next section, we will describe a policy gradient method which we use to update parameters 0c so that the controller RNN generates better architectures over time.\nThe list of tokens that the controller predicts can be viewed as a list of actions a1:T to design ar. architecture for a child network. At convergence, this child network will achieve an accuracy R o. a held-out dataset. We can use this accuracy R as the reward signal and use reinforcement learning. to train the controller. More concretely, to find the optimal architecture, we ask our controller tc. maximize its expected reward, represented by J(0c):\nSince the reward signal R is non-differentiable, we need to use a policy gradient method to iteratively update 0c. In this work, we use the REINFORCE rule from|Williams|(1992):\nT VocJ(0c) = Ep(a1:T;0c)Vec log P(at|a(t-1):1; 0c)R t=1\nAn empirical approximation of the above quantity is:\nm T 1 Ve log P(at|a(t-1):1;0c)Rk m k=1 t=1\nWhere m is the number of different architectures that the controller samples in one batch and T is the number of hyperparameters our controller has to predict to design a neural network architecture.\nNumber Filter Filter Stride Stride Number Filter of Filters Height Width Height Width of Filters Height ayer N-1 Layer N Layer N+\nNumber Filter Filter Stride Stride Number Filter of Filters Height Width Height Width of Filters Height N-1 Layer N Layer N+1\nJ(0c) = Ep(a1:r;0c)[R]\nThe above update is an unbiased estimate for our gradient, but has a very high variance. In order tc reduce the variance of this estimate we employ a baseline function:.\nm T 1 m k=1 t=1\nAs long as the baseline function 6 does not depend on the on the current action, then this is still ar unbiased gradient estimate. 
In this work, our baseline b is an exponential moving average of the previous architecture accuracies.\nAccelerate Training with Parallelism and Asynchronous Updates: In Neural Architecture Search, each gradient update to the controller parameters 0c corresponds to training one child net work to convergence. As training a child network can take hours, we use distributed training and asynchronous parameter updates in order to speed up the learning process of the controller (Dean et al.2012). We use a parameter-server scheme where we have a parameter server of S shards, that store the shared parameters for K controller replicas. Each controller replica samples m different child architectures that are trained in parallel. The controller then collects gradients according to the results of that minibatch of m architectures at convergence and sends them to the parameter server in order to update the weights across all controller replicas. In our implementation, convergence of each child network is reached when its training exceeds a certain number of epochs. This scheme of parallelism is summarized in Figure3\nTo enable the controller to predict such connections, we use a set-selection type attention (Neelakan. tan et al.[[2015) which was built upon the attention mechanism (Bahdanau et al.[[2015 [Vinyals et al. 2015). At layer N, we add an anchor point which has N - 1 content-based sigmoids to indicate the. previous layers that need to be connected. Each sigmoid is a function of the current hiddenstate of the controller and the previous hiddenstates of the previous N - 1 anchor points:.\nP(Laver i is an input to layer i) = sigmoid(* Wcurr * hi)\nwhere h; represents the hiddenstate of the controller at anchor point for the j-th layer, where J ranges from 0 to N - 1. We then sample from these sigmoids to decide what previous layers to be used as inputs to the current layer. The matrices Wprev, Wcurr and v are trainable parameters. As.\nParameter Parameter Parameter Server 1 Server 2 Server S Parameters 0. Controller Controller Controller Replica 1 Replica 2 Replica K Accuracy R Child Child Child Child Child Child Child Child Child Replica 1 Replica 2 Replica m Replica 1 Replica 2 Replica m Replica 1 Replica 2 Replica m\nFigure 3: Distributed training for Neural Architecture Search. We use a set of S parameter servers to store and send parameters to K controller replicas. Each controller replica then samples m archi tectures and run the multiple child models in parallel. The accuracy of each child model is recorded to compute the gradients with respect to 0c, which are then sent back to the parameter servers.\nIn Section|3.1, the search space does not have skip connections, or branching layers used in modern architectures such as GoogleNet (Szegedy et al.[2015), and Residual Net (He et al.[2016a). In this section we introduce a method that allows our controller to propose skip connections or branching layers, thereby widening the search space\nthese connections are also defined by probability distributions, the REINFORCE method still applies without any significant modifications. 
Figure|4|shows how the controller uses skip connections to decide what layers it wants as inputs to the current layer.\nNumber Anchor Filter Filter Stride Stride Anchor Number Filter of Filters Point Height Width Height Width Point of Filters Height Layer N-1 Layer N Layer N-\nFigure 4: The controller uses anchor points, and set-selection attention to form skip connections\nIn our framework, if one layer has many input layers then all input layers are concatenated in th depth dimension. Skip connections can cause \"compilation failures\"' where one layer is not compat ible with another layer, or one layer may not have any input or output. To circumvent these issues we employ three simple techniques. First, if a layer is not connected to any input layer then th image is used as the input layer. Second, at the final layer we take all layer outputs that have no been connected and concatenate them before sending this final hiddenstate to the classifier. Lastly if input layers to be concatenated have different sizes, we pad the small layers with zeros so that the. concatenated layers have the same sizes.\nFinally, in Section[3.1] we do not predict the learning rate and we also assume that the architectures consist of only convolutional layers, which is also quite restrictive. It is possible to add the learning. rate as one of the predictions. Additionally, it is also possible to predict pooling, local contrast normalization (Jarrett et al.|2009) Krizhevsky et al.]2012), and batchnorm (Ioffe & Szegedy]2015 in the architectures. To be able to add more types of layers, we need to add an additional step in the. controller RNN to predict the layer type, then other hyperparameters associated with it.."}, {"section_index": "3", "section_name": "3.4 GENERATE RECURRENT CELL ARCHITECTURES", "section_text": "In this section, we will modify the above method to generate recurrent cells. At every time step t. the controller needs to find a functional form for h, that takes x, and ht-1 as inputs. The simplest way is to have ht = tanh(W1 * xt + W2 * ht-1), which is the formulation of a basic recurrent cell. A more complicated formulation is the widely-used LSTM recurrent cell (Hochreiter & Schmidhuber 1997).\nThe computations for basic RNN and LSTM cells can be generalized as a tree of steps that take x. and ht-1 as inputs and produce ht as final output. The controller RNN needs to label each node ir. the tree with a combination method (addition, elementwise multiplication, etc.) and an activatior. function (tanh, sigmoid, etc.) to merge two inputs and produce one output. Two outputs are ther. fed as inputs to the next node in the tree. To allow the controller RNN to select these methods anc. functions. we index the nodes in the tree in an order so that the controller RNN can visit each nod one by one and label the needed hyperparameters..\nInspired by the construction of the LSTM cell (Hochreiter & Schmidhuber1997), we also need cell variables c-1 and ct to represent the memory states. To incorporate these variables, we need the controller RNN to predict what nodes in the tree to connect these two variables to. These predictions can be done in the last two blocks of the controller RNN.\nTo make this process more clear, we show an example in Figure[5] for a tree structure that has two leaf nodes and one internal node. The leaf nodes are indexed by O and 1, and the internal node is. indexed by 2. 
The controller RNN needs to first predict 3 blocks, each block specifying a combina- tion method and an activation function for each tree index. After that it needs to predict the last 2. blocks that specify how to connect ct and ct-1 to temporary variables inside the tree. Specifically,.\nFigure 5: An example of a recurrent cell constructed from a tree that has two leaf nodes (base 2). and one internal node. Left: the tree that defines the computation steps to be predicted by controller. Center: an example set of predictions made by the controller for each computation step in the tree Right: the computation graph of the recurrent cell constructed from example predictions of the. controller.\naccording to the predictions of the controller RNN in this example, the following computation steps will occur:\nIn the above example, the tree has two leaf nodes, thus it is called a \"base 2\" architecture. In our experiments, we use a base number of 8 to make sure that the cell is expressive..\nWe apply our method to an image classification task with CIFAR-10 and a language modeling task. with Penn Treebank, two of the most benchmarked datasets in deep learning. On CIFAR-10, our. goal is to find a good convolutional architecture whereas on Penn Treebank our goal is to find a good. recurrent cell. On each dataset, we have a separate held-out validation dataset to compute the reward. signal. The reported performance on the test set is computed only once for the network that achieves the best result on the held-out validation dataset. More details about our experimental procedures. and results are as follows."}, {"section_index": "4", "section_name": "4.1 LEARNING CONVOLUTIONAL ARCHITECTURES FOR CIFAR-1O", "section_text": "Dataset: In these experiments we use the CIFAR-10 dataset with data preprocessing and aug mentation procedures that are in line with other previous results. We first preprocess the data by whitening all the images. Additionally, we upsample each image then choose a random 32x32 crop of this upsampled image. Finally, we use random horizontal flips on this 32x32 cropped image..\nSearch space: Our search space consists of convolutional architectures, with rectified linear units. as non-linearities (Nair & Hinton2010), batch normalization (Ioffe & Szegedy2015) and skij. connections between layers (Section|3.3). For every convolutional layer, the controller RNN has tc. select a filter height in [1, 3, 5, 7], a filter width in [1, 3, 5, 7], and a number of filters in [24, 36, 48\nht Elem Elem Sig Add Tanh ReLU Add ReLU Mult Mult moid sigmoid Tree elem mult Index 2 relu relu add Tree Tree tanh Index 0 Index 1 elem mult add ht-1 Xt ht-1 Xt Tree Index 0 Tree Index 1 Tree Index 2 Cell Inject. Cell Indices ht-1 Ct-1\nht h Ct Elem Elem Sig Add Tanh ReLU Add ReLU Mult Mult moid sigmoid Tree elem mult Index 2 relu relu add Tree Tree tanh Index 0 Index 1 elem mult add 0 ht-1 Xt ht-1 Xt Tree Index 0 Tree Index Tree Index 2 Cell Inject Cell Indices ht-1 Ct-1\nThe controller predicts Add and Tanh for tree index 0, this means we need to compute. ao = tanh(W1 * xt + W2 * ht-1). The controller predicts ElemMult and ReLU for tree index 1, this means we need to. compute a1 = ReLU((W3 * xt) O (W4 * ht-1)) The controller predicts O for the second element of the \"Cell Index', Add and ReLU for. elements in \"Cell Inject\", which means we need to compute anew = ReLU(ao + ct-1). Notice that we don't have any learnable parameters for the internal nodes of the tree.. 
The controller predicts ElemMult and Sigmoid for tree index 2, this means we need to. compute a2 = sigmoid(anew O a1). Since the maximum index in the tree is 2, ht is set to a2. The controller RNN predicts 1 for the first element of the \"Cell Index\", this means that we should set ct to the output of the tree at index 1 before the activation, i.e., ct = (W3 * xt) O. (W4 * ht-1).\n64]. For strides, we perform two sets of experiments, one where we fix the strides to be 1, and on where we allow the controller to predict the strides in [1, 2, 3].\nTraining details: The controller RNN is a two-layer LSTM with 35 hidden units on each layer It is trained with the ADAM optimizer (Kingma & Ba2015) with a learning rate of O.0006. The weights of the controller are initialized uniformly between -0.08 and O.08. For the distributed train- ing, we set the number of parameter server shards S to 20, the number of controller replicas K to 100 and the number of child replicas m to 8, which means there are 800 networks being trained on 800 GPUs concurrently at any time.\nOnce the controller RNN samples an architecture, a child model is constructed and trained for 50 epochs. The reward used for updating the controller is the maximum validation accuracy of the last 5 epochs cubed. The validation set has 5,o00 examples randomly sampled from the training set, the remaining 45,000 examples are used for training. The settings for training the CIFAR-10 child models are the same with those used in Huang et al.(2016a). We use the Momentum Optimizer. with a learning rate of 0.1, weight decay of 1e-4, momentum of 0.9 and used Nesterov Momentum. (Sutskever et al.|2013).\nDuring the training of the controller, we use a schedule of increasing number of layers in the child networks as training progresses. On CIFAR-10, we ask the controller to increase the depth by 2 for. the child models every 1,600 samples, starting at 6 layers.\nResults: After the controller trains 12,800 architectures, we find the architecture that achieves the. best validation accuracy. We then run a small grid search over learning rate, weight decay, batchnorn. epsilon and what epoch to decay the learning rate. The best model from this grid search is then rur. until convergence and we then compute the test accuracy of such model and summarize the result in Table[1 As can be seen from the table, Neural Architecture Search can design several promising architectures that perform as well as some of the best models on this dataset..\nModel Depth Parameters Error rate (%) Network in Network (Lin et al.]2013) 8.81 All-CNN (Springenberg et al.]2014 7.25 Deeply Supervised Net (Lee et al.2015) 7.97 Highway Network (Srivastava et al.]2015 7.72 Scalable Bayesian Optimization (Snoek et al.2015) 6.37 FractalNet (Larsson et al.2016) 21 38.6M 5.22 with Dropout/Drop-path 21 38.6M 4.60 ResNet (He et al.,2016a 110 1.7M 6.61 ResNet (reported byHuang et al.2016c)) 110 1.7M 6.41 ResNet with Stochastic Depth (Huang et al.2016c) 110 1.7M 5.23 1202 10.2M 4.91 Wide ResNet (Zagoruyko & Komodakis,2016) 16 11.0M 4.81 28 36.5M 4.17 ResNet (pre-activation) (He et al.]2016b) 164 1.7M 5.46 1001 10.2M 4.62 DenseNet (L = 40, k = 12) Huang et al.2016a 40 1.0M 5.24 DenseNet(L = 100, k = 12)Huang et al. 2016a 100 7.0M 4.10 DenseNet (L = 100, k = 24)JHuang et al 2016a 100 27.2M 3.74 DenseNet-BC (L = 100, k = 40) Huang et al.(2016b) 190 25.6M 3.46 Neural Architecture Search v1 no stride or pooling. 15 4.2M 5.50 Neural Architecture Search v2 predicting strides. 
20 2.5M 6.01 Neural Architecture Search v3 max pooling. 39 7.1M 4.47 Neural Architecture Search v3 max pooling + more filters 39 37.4M 3.65\nTable 1: Performance of Neural Architecture Search and other state-of-the-art models on CIFAR-10\nFirst, if we ask the controller to not predict stride or pooling, it can design a 15-layer architecture that achieves 5.50% error rate on the test set. This architecture has a good balance between accuracy and depth. In fact, it is the shallowest and perhaps the most inexpensive architecture among the top. performing networks in this table. This architecture is shown in Appendix [A] Figure[7] A notable. feature of this architecture is that it has many rectangular filters and it prefers larger filters at the. top layers. Like residual networks (He et al.]2016a), the architecture also has many one-step skip. connections. This architecture is a local optimum in the sense that if we perturb it, its performance becomes worse. For example, if we densely connect all layers with skip connections, its performance. becomes slightly worse: 5.56%. If we remove all skip connections, its performance drops to 7.97%\nFinally, if we allow the controller to include 2 pooling layers at layer 13 and layer 24 of the archi. tectures, the controller can design a 39-layer network that achieves 4.47% which is very close tc. the best human-invented architecture that achieves 3.74%. To limit the search space complexity we. have our model predict 13 layers where each layer prediction is a fully connected block of 3 layers. Additionally, we change the number of filters our model can predict from [24, 36, 48, 64] to [6, 12. 24, 36]. Our result can be improved to 3.65% by adding 40 more filters to each layer of our archi. tecture. Additionally this model with 40 filters added is 1.05x as fast as the DenseNet model tha achieves 3.74%, while having better performance. The DenseNet model that achieves 3.46% erro. rate (Huang et al.|2016b) uses 1x1 convolutions to reduce its total number of parameters, which we did not do, so it is not an exact comparison.."}, {"section_index": "5", "section_name": "4.2 LEARNING RECURRENT CELLS FOR PENN TREEBANK", "section_text": "Dataset: We apply Neural Architecture Search to the Penn Treebank dataset, a well-known bench. mark for language modeling. On this task, LSTM architectures tend to excel (Zaremba et al.]2014 Gall[2015), and improving them is difficult (Jozefowicz et al.][2015). As PTB is a small dataset, reg ularization methods are needed to avoid overfitting. First, we make use of the embedding dropou and recurrent dropout techniques proposed inZaremba et al. (2014) and (Gal[2015). We also try t combine them with the method of sharing Input and Output embeddings, e.g.,Bengio et al.(2003] Mnih & Hinton(2007), especially Inan et al.(2016) and Press & Wolf (2016). Results with thi method are marked with \"shared embeddings.\nSearch space: Following Section[3.4 our controller sequentially predicts a combination methoc. then an activation function for each node in the tree. For each node in the tree, the controlle RNN needs to select a combination method in [add, elem_mult] and an activation method ir. identity, tanh, sigmoid, relu]. The number of input pairs to the RNN cell is called the \"base. number\"' and set to 8 in our experiments. When the base number is 8, the search space is has ap. proximately 6 1016 architectures, which is much larger than 15,000, the number of architectures. 
that we allow our controller to evaluate..\nTraining details: The controller and its training are almost identical to the CIFAR-10 experiments. except for a few modifications: 1) the learning rate for the controller RNN is O.o005, slightly smalle. than that of the controller RNN in CIFAR-10, 2) in the distributed training, we set S to 20, K to 40C and m to 1, which means there are 400 networks being trained on 400 CPUs concurrently at any time, 3) during asynchronous training we only do parameter updates to the parameter-server once 10 gradients from replicas have been accumulated.\nIn our experiments, every child model is constructed and trained for 35 epochs. Every child model has two layers, with the number of hidden units adjusted so that total number of learnable parameters approximately match the \"medium' baselines (Zaremba et al.] 2014} Gal]2015). In these experi ments we only have the controller predict the RNN cell structure and fix all other hyperparameters. The reward function is plexity)2 where c is a constant, usually set at 80. validati\nAfter the controller RNN is done training, we take the best RNN cell according to the lowest val idation perplexity and then run a grid search over learning rate, weight initialization, dropout rates\nIn the second set of experiments, we ask the controller to predict strides in addition to other hyper-. parameters. As stated earlier, this is more challenging because the search space is larger. In this case, it finds a 20-layer architecture that achieves 6.01% error rate on the test set, which is not much worse than the first set of experiments..\nResults:In Table [2] we provide a comprehensive list of architectures and their performance on the PTB dataset. As can be seen from the table, the models found by Neural Architecture Search outperform other state-of-the-art models on this dataset, and one of our best models achieves a gain of almost 3.6 perplexity. Not only is our cell is better, the model that achieves 64 perplexity is also more than two times faster because the previous best network requires running a cell 10 times per time step (Zilly et al.2016).\nModel Parameters Test Perplexity Mikolov & Zweig 2012) - KN-5 2M+ 141.2 Mikolov & Zweig. 2012 - KN5 + cache 2M+ 125.7 Mikolov & Zweig. 2012) - RNN 6M+ 124.7 Mikolov & Zweig. 2012 - RNN-LDA 7M+ 113.7 Mikolov & Zweig (2012) - RNN-LDA + KN-5 + cache 9M* 92.0 Pascanu et al.(2013) - Deep RNN 6M 107.5 Cheng et al.(2014) - Sum-Prod Net 5M+ 100.0 (2014) - LSTM (medium) 20M 82.7 arembaet al (2014) - LSTM (large) Zaremba et al. 66M 78.4 Gal (2015 Variational LSTM (medium. untied). 20M 79.7 Gal 2015 Variational LSTM (medium, untied, MC). 20M 78.6 Gal 2015 - Variational LSTM (large, untied) 66M 75.2 Gal 2015 -Variational LSTM (large, untied, MC) 66M 73.4 Kim et al. 2015) - CharCNN 19M 78.9 Press & Wolf|(2016) - Variational LSTM, shared embeddings 51M 73.2 Merity et al. 2016) - Zoneout + Variational LSTM (medium) 20M 80.6 Merity et al. 2016) - Pointer Sentinel-LSTM (medium). 21M 70.9 2016) - VD-LSTM + REAL (large) 51M nan et al 68.5 Zilly et al [2016) - Variational RHN, shared embeddings 24M 66.0 Neural Architecture Search with base 8. 32M 67.9 Neural Architecture Search with base 8 and shared embeddings 25M 64.0 Neural Architecture Search with base 8 and shared embeddings. 
54M 62.4\nTable 2: Single model perplexity on the test set of the Penn Treebank language modeling tasl Parameter numbers with # are estimates with reference to|Merity et al.(2016).\nThe newly discovered cell is visualized in Figure 8|in Appendix|A] The visualization reveals that the new cell has many similarities to the LSTM cell in the first few steps, such as it likes to compute W1 * ht-1 + W2 * xt several times and send them to different components in the cell..\nTransfer Learning Results: : To understand whether the cell can generalize to a different task, w. apply it to the character language modeling task on the same dataset. We use an experimental setuj. that is similar toHa et al.(2016), but use variational dropout by[Gal|(2015). We also train our ow. LSTM with our setup to get a fair LSTM baseline. Models are trained for 80K steps and the best tes. set perplexity is taken according to the step where validation set perplexity is the best. The result. on the test set of our method and state-of-art methods are reported in Table[3] The results on smal settings with 5-6M parameters confirm that the new cell does indeed generalize, and is better tha. the LSTM cell.\nAdditionally, we carry out a larger experiment where the model has 16.28M parameters. This model has a weight decay rate of 1e - 4, was trained for 600K steps (longer than the above models) and the test perplexity is taken where the validation set perplexity is highest. We use dropout rates of 0.2 and 0.5 as described in|Ga1|(2015), but do not use embedding dropout. We use the ADAM optimizer with a learning rate of O.001 and an input embedding size of 128. Our model had two layers with 800 hidden units. We used a minibatch size of 32 and BPTT length of 100. With this setting, our model achieves 1.214 perplexity, which is the new state-of-the-art result on this task.\nFinally, we also drop our cell into the GNMT framework (Wu et al.]2016), which was previously tuned for LSTM cells, and train an WMT14 English -> German translation model. The GNMT\nRNN Cell Type Parameters Test Bits Per Character. Ha et al. 2016) - Layer Norm HyperLSTM 4.92M 1.250 Ha et al. 2016 - Layer Norm HyperLSTM Large Embeddings 5.06M 1.233 Ha et al. 2016 - 2-Layer Norm HyperLSTM 14.41M 1.219 Two layer LSTM 6.57M 1.243 Two Layer with New Cell. 6.57M 1.228 Two Layer with New Cell. 16.28M 1.214\nTable 3: Comparison between our cell and state-of-art methods on PTB character modeling. Th new cell was found on word level language modeling\nnetwork has 8 layers in the encoder, 8 layers in the decoder. The first layer of the encoder ha bidirectional connections. The attention module is a neural network with 1 hidden layer. When . LSTM cell is used, the number of hidden units in each layer is 1024. The model is trained in a. distributed setting with a parameter sever and 12 workers. Additionally, each worker uses 8 GPU. and a minibatch of 128. We use Adam with a learning rate of 0.0002 in the first 60K training steps. and SGD with a learning rate of O.5 until 400K steps. After that the learning rate is annealed by. dividing by 2 after every 100K steps until it reaches 0.1. Training is stopped at 800K steps. More. details can be found inWu et al.(2016).\nControl Experiment 1 Adding more functions in the search space: To test the robustness of Neural Architecture Search. we add max to the list of combination functions and sin to the list of activation functions and rerun our experiments. 
The results show that even with a bigger search space, the model can achieve somewhat comparable performance. The best architecture with max and sin is shown in Figure|8|in Appendix|A\n40 Top 1 unique models 35 Top_5_unique_models Top_15_unique_models 30 25 20 7 10 5 0 5000 10000 15000 20000 25000 Iteratior\nTop 1 unique models 35 Top 5 unique models Top 15 unique models 30 25 20 15 10 5000 10000 15000 20000 25000 Iteration\nFigure 6: Improvement of Neural Architecture Search over random search over time. We plot th difference between the average of the top k models our controller finds vs. random search every 400. models run.\nIn our experiment with the new cell, we make no change to the above settings except for dropping in the new cell and adjusting the hyperparameters so that the new model should have the same compu-. tational complexity with the base model. The result shows that our cell, with the same computational complexity, achieves an improvement of O.5 test set BLEU than the default LSTM cell. Though this. improvement is not huge, the fact that the new cell can be used without any tuning on the existing GNMT framework is encouraging. We expect further tuning can help our cell perform better..\nControl Experiment 2 - Comparison against Random Search:Instead of policy gradient, one can use random search to find the best network. Although this baseline seems simple, it is often very hard to surpass (Bergstra & Bengio2012). We report the perplexity improvements using policy gradient against random search as training progresses in Figure 6 The results show that not only the best model using policy gradient is better than the best model using random search, but also the average of top models is also much better."}, {"section_index": "6", "section_name": "5 CONCLUSION", "section_text": "In this paper we introduce Neural Architecture Search, an idea of using a recurrent neural network to compose neural network architectures. By using recurrent network as the controller, our method is flexible so that it can search variable-length architecture space. Our method has strong empirical per- formance on very challenging benchmarks and presents a new research direction for automatically finding good neural network architectures. The code for running the models found by the controller on CIFAR-10 and PTB will be released at https://github.com/tensorflow/models . Additionally, we have added the RNN cell found using our method under the name NASCell into TensorFlow, so others can easily use it."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Greg Corrado, Jeff Dean, David Ha, Lukasz Kaiser and the Google Brain team for thei help with the project."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Learning to compose neura networks for question answering. In NAACL. 2016\nYoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilisti language model. JMLR, 2003\nJames Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. JMLR, 2012\nJames Bergstra, Remi Bardenet, Yoshua Bengio, and Balazs Kegl. Algorithms for hyper-parameter optimization. In NIPS, 2011.\nJames Bergstra, Daniel Yamins, and David D Cox. Making a science of model search: Hyperpa rameter optimization in hundreds of dimensions for vision architectures. ICML, 2013.\nNavneet Dalal and Bill Triggs. Histograms of oriented gradients for human detection. 
In CVPR 2005.\nJeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Andrew Senior. Paul Tucker, Ke Yang, Quoc V. Le, et al. Large scale distributed deep networks. In NIPs, 2012.\nDario Floreano, Peter Durr, and Claudio Mattiussi. Neuroevolution: from architectures to learning Evolutionary Intelligence, 2008.\nYarin Gal. A theoretically grounded application of dropout in recurrent neural networks. arXiv preprint arXiv:1512.05287, 2015.\nDavid Ha, Andrew Dai, and Quoc V. Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog nition. In CVPR, 2016a.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residua networks. arXiv preprint arXiv:1603.05027, 2016b\nWei-Chen Cheng, Stanley Kok, Hoai Vu Pham, Hai Leong Chieu, and Kian Ming Adam Chai Language modeling with sum-product networks. In INTERSPEECH, 2014.\nGeoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitl Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networ for acoustic modeling in speech recognition: The shared views of four research groups. IEE Signal Processing Magazine, 2012.\nSepp Hochreiter and Juergen Schmidhuber. Long short-term memory. Neural Computation. 1997\nGao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks arXiv preprint arXiv:1608.06993, 2016a.\nGao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connectec convolutional networks. arXiv preprint arXiv:1608.06993, 2016b\nSergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.\nKevin Jarrett, Koray Kavukcuoglu, Yann Lecun, et al. What is the best multi-stage architecture fo. object recognition? In ICCV, 2009.\nRafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In ICML, 2015.\nYoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. Character-aware neural languag models. arXiv preprint arXiv:1508.06615, 2015.\nDiederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convo lutional neural networks. In NIPS. 2012.\nBrenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015..\nGustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net works without residuals. arXiy preprint arXiv:1605.07648. 2016\neprint arXiv:1606.01885, 2016 Ke Li and Jitendra Malik. Learning to optimize. arXiv j\nMin Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2013.\nDavid G. Lowe. Object recognition from local scale-invariant features. In CVPR, 1999\nStephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.\nTomas Mikolov and Geoffrey Zweig. Context dependent recurrent neural network language model In SLT, pp. 234-239, 2012\nHakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462, 2016\nYann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied tc document recognition. 
Proceedings of the IEEE, 1998\nArvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. In ICLR, 2015.\nScott Reed and Nando de Freitas. Neural prc rammer-interpreters. In ICLR, 2015.\nShreyas Saxena and Jakob Verbeek. Convolutional neural fabrics. In NIPs, 2016\nShiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. Minimum risk training for neural machine translation. In ACL, 2016.\nJasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In N1PS, 2012\nJasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram Mostofa Patwary, Mostofa Ali, Ryan P. Adams, et al. Scalable bayesian optimization using deep neural networks. In ICML, 2015.\nJost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving fo simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.\nRupesh Kumar Srivastava, Klaus Greff, and Jurgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.\nKenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci. A hypercube-based encoding fo evolving large-scale neural networks. Artificial Life, 2009.\nIlya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initializa tion and momentum in deep learning. In ICML, 2013.\nIlya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks In NIPS, 2014.\nSebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012\nOriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPs, 2015\nDaan Wierstra, Faustino J Gomez, and Jurgen Schmidhuber. Modeling systems with internal state using evolino. In GECCO, 2005.\nRazvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026. 2013\nYonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, et al. Google's neural machine translation system: Bridging the gap between human and machine translation.. arXiv preprint arXiv:1609.08144, 2016.\nSergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMvC, 2016\nJulian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474. 2016\nAPPENDIX Softmax ^ FH: 7 FW: 5 N: 48 FH: 7 FW: 5 N: 48 FH: 7 FW: 5 N: 48 FH: 7 FW: 7 N: 48 FH: 5 FW: 7 N: 36 FH: 7 FW: 7 N: 36 FH: 7 FW: 1 N: 36 FH: 7 FW: 3 N: 36 FH: 7 FW: 7 N: 48 FH: 7 FW: 7 N: 48 FH: 3 FW: 7 N: 48 FH: 5 FW: 5 N: 36 FH: 3 FW: 3 N: 36 FH: 3 FW: 3 N: 48 FH: 3 FW: 3 N: 36 Image\nFigure 7: Convolutional architecture discovered by our method, when the search space does not have strides or pooling layers. FH is filter height, FW is filter width and N is number of filters. Note that the skip connections are not residual connections. 
If one layer has many input layers then all input layers are concatenated in the depth dimension\nh identity tanh elem mult identity elem mult ( tanh tanh add elem mult add sigmoid( tanh identity tanh tanh sigmoid( add ( add add elem mult tanh elem_mult Oidentity elem mult relu tanh )sigmoid elem mult - tanh sigmoid sigmoid sigmoid tanh sigmoid C sigmoid relu add add Jadd Jadd ) add ) add add )add add add )add elem mult Xt ht-1 Ct-1 Xt ht-1 Ct-1 tanh elem_mult identity elem_mult sigmoid )elem mult sigmoid sigmoid( sigmoid7 identity add ( add add add identity tanh add tahh identity relu tahh tahh identity tanh add ) add add add )add max max max Xt ht-1 Ct-1\nFigure 8: A comparison of the original LSTM cell vs. two good cells our model found. Top left LSTM cell. Top right: Cell found by our model when the search space does not include max an sin. Bottom: Cell found by our model when the search space includes max and sin (the controlle did not choose to use the sin function)."}] |
SkxKPDv5xl | [{"section_index": "0", "section_name": "SAMPLERNN: AN UNCONDITIONAL END-TO-E NEURAL AUDIO GENERATION MODEL", "section_text": "Soroush Mehri\nKundan Kumar\nUniversity of Montreal\nUniversity of Montreal"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Audio generation is a challenging task at the core of many problems of interest, such as text-to speech synthesis, music synthesis and voice conversion. The particular difficulty of audio generation is that there is often a very large discrepancy between the dimensionality of the the raw audio signal and that of the effective semantic-level signal. Consider the task of speech synthesis, where we are typically interested in generating utterances corresponding to full sentences. Even at a relatively low sample rate of 16kHz, on average we will have 6,000 samples per word generated.\nTraditionally, the high-dimensionality of raw audio signal is dealt with by first compressing it into. spectral or hand-engineered features and defining the generative model over these features. However when the generated signal is eventually decompressed into audio waveforms, the sample quality is often degraded and requires extensive domain-expert corrective measures. This results in compli cated signal processing pipelines that are to adapt to new tasks or domains. Here we propose a step in the direction of replacing these handcrafted systems..\nIn this work, we investigate the use of recurrent neural networks (RNNs) to model the dependencies in audio data. We believe RNNs are well suited as they have been designed and are suited solutions for these tasks (see Graves (2013), Karpathy(2015), and Siegelmann (1999)). However, in practice it is a known problem of these models to not scale well at such a high temporal resolution as is found when generating acoustic signals one sample at a time, e.g., 16000 times per second. This is one of the reasons that[Oord et al.(2016) profits from other neural modules such as one presented by Yu & Koltun(2015) to show extremely good performance.\nIn this paper, an end-to-end unconditional audio synthesis model for raw waveforms is presented. while keeping all the computations tractable. Since our model has different modules operating. at different clock-rates (which is in contrast to WaveNet), we have the flexibility in allocating the. amount of computational resources in modeling different levels of abstraction. In particular, we. can potentially allocate very limited resource to the module responsible for sample level alignments\nIshaan Gulrajani\nRithesh Kumar\nUniversity of Montreal CIFAR Senior Fellow\nn this paper we propose a novel model for unconditional audio generation basec on generating one audio sample at a time. We show that our model, which profits rom combining memory-less modules, namely autoregressive multilayer percep rons, and stateful recurrent neural networks in a hierarchical structure is able tc apture underlying sources of variations in the temporal sequences over very lon ime spans, on three datasets of different nature. Human evaluation on the gener ted samples indicate that our model is preferred over competing models. We alsc how how each component of the model contributes to the exhibited performance\noperating at the clock-rate equivalent to sample-rate of the audio, while allocating more resources. in modeling dependencies which vary very slowly in audio, for example identity of phoneme being. spoken. 
This advantage makes our model arbitrarily flexible in handling sequential dependencies at multiple levels of abstraction.\nT-1 Ir p(X) = p(xi+1|x1,..,xi i=0\nht =H(ht-1,Xi=t p(xi+1|x1,...,xi) = Softmax(MLP(ht)\nRather than operating on individual samples, the higher-level modules in SampleRNN operate on non-overlapping frames of FS(k) (\"Frame Size') samples at the kth level up in the hierarchy at a. time (frames denoted by f(k)). Each frame-level module is a deep RNN which summarizes the. history of its inputs into a conditioning vector for the next module downward..\nThe variable number of frames we condition upon up to timestep t -- 1 is expressed by a fixed length. top tier k = K is simply the input frame. For intermediate tiers (1 < k < K) this input is a linear combination of conditioning vector from higher tier and current input frame. See Eqs.45\nBecause different modules operate at different temporal resolutions, we need to upsample each. vector c at the output of a module into a series of r(k) vectors (where r(k) is the ratio between the. temporal resolutions of the modules) before feeding it into the input of the next module downward (Eq.6. We do this with a set of r(k) separate linear projections.\n1. We present a novel method that utilizes RNNs at different scales to model longer term d pendencies in audio waveforms while training on short sequences which results in memor. efficiency during training 2. We extensively explore and compare variants of models achieving the above effect.. 3. We study and empirically evaluate the impact of different components of our model o. three audio datasets. Human evaluation also has been conducted to test these generativ. models.\nIn this paper we propose SampleRNN (shown in Fig.1), a density model for audio waveforms. SampleRNN models the probability of a sequence of waveform samples X = {x1, x2,..., xT}. (a random variable over input data sequences) as the product of the probabilities of each sample conditioned on all previous samples:\nwith H being one of the known memory cells, Gated Recurrent Units (GRUs) (Chung et al.| 2014) Long Short Term Memory Units (LSTMs) (Hochreiter & Schmidhuber1997), or their deep varia- tions (Section 3). However, raw audio signals are challenging to model because they contain struc- ture at very different scales: correlations exist between neighboring samples as well as between ones thousands of samples apart.\nSampleRNN helps to address this challenge by using a hierarchy of modules, each operating at a different temporal resolution. The lowest module processes individual samples, and each higher. module operates on an increasingly longer timescale and a lower temporal resolution. Each module. conditions the module below it, with the lowest module outputting sample-level predictions. The. entire hierarchy is trained jointly end-to-end by backpropagation..\nXi, ..., Xi+15 Xi+16,..., Xi+31 Tier 3 Xi+12,...,Xi+15 Xi+24,...,Xi+27 Xi+28,..,Xi+31 Xi+40,..., Xi+43 Tier 2 Tier 1 Xi+28,..., Xi+31 AR Xi+29,..., Xi+32 AR AR Xi+30,...,Xi+33 Xi+31,...,Xi+34 AR \\MLP MLP MLP MLP p(Xi+32 | x<i+32) p(xi+33x<i+33) p(xi+34 | x<i+34) p(xi+35| x<i+35)\nFigure 1: Snapshot of the unrolled model at timestep i with K = 3 tiers. As a simplification only one RNN and up-sampling ratio r = 4 is used for all tiers.\n1<k<K k = K ht = H(ht-1,in 1<j<r t-1*r+J\nOur approach of upsampling with r(k) linear projections is exactly equivalent to upsampling by. adding zeros and then applying a linear convolution. 
This is sometimes called \"perforated\"' upsam. pling in the context of convolutional neural networks (CNNs). It was first demonstrated to work well in|Dosovitskiy et al.[(2016) and is a fairly common upsampling technique.\n= flatten([ei-Fs1),...,ei-1] )= flatten([ei-Fs(1)+1,..., in ) = Softmax(MLP(inp)) p(Xi+1X1,..,X\nWe use a Softmax because we found that better results were obtained by discretizing the audic. signals (also see van den Oord et al.(2016)) and outputting a Multinoulli distribution rather than. using a Gaussian or Gaussian mixture to represent the conditional density of the original real-valued signal. When processing an audio sequence, the MLP is convolved over the sequence, processing.\nXi, ..., Xi+15 Xi+16, ...,Xi+31 Tier 3 Xi+12,..., Xi+15 Xi+24,...,Xi+27 Xi+28,...,Xi+31 Xi+40, ..., Xi+43 Tier 2 Tier 1 Xi+28,..., Xi+31 AR Xi+29, ..., Xi+32 AR Xi+30, ..., Xi+33 AR Xi+31,...,Xi+34 AR MLP \\MLP MLP MLP p(xi+32 | x<i+32) p(xi+33 x<i+33) p(xi+34| x<i+34) p(xi+35 | x<i+35)\nThe lowest module (tier k = 1; Eqs.7H9) in the SampleRNN hierarchy outputs a distribution over. c(k=2)from the a sample xi+1, conditioned on the FS(1) preceding samples as well as a vector c. next higher module which encodes information about the sequence prior to that frame. As FS(1) is. usually a small value and correlations in nearby samples are easy to model by a simple memoryless module, we implement it with a multilayer perceptron (MLP) rather than RNN which slightly speeds. up the training. Assuming e; represents x; after passing through embedding layer (section2.2.1) conditional distribution in Eq.1|can be achieved by following and for further clarity two consecutive sample-level frames are shown. In addition, W. in Eq.8|is simply used to linearly combine a frame. and conditioning vector from above\n(1) = flatten([e;-Fs(1),...,ei-1] = flatten([e-Fs(1)+1,..., Softmax(MLP(inp"}, {"section_index": "2", "section_name": "2.2.1 OUTPUT OUANTIZATION", "section_text": "To demonstrate the importance of a discrete output distribution, we apply the same architecture on. real-valued data by replacing the q-way Softmax with a Gaussian Mixture Models (GMM) outpu distribution. Table|2|shows that our model outperforms an RNN baseline even when both models use real-valued outputs. However, samples from the real-valued model are almost indistinguishable. from random noise.\nIn this work we use linear quantization with q = 256, corresponding to a per-sample bit depth of 8 Unintuitively, we realized that even linearly decreasing the bit depth (resolution of each audio sam- ple) from 16 to 8 can ease the optimization procedure while generated samples still have reasonable quality and are artifact-free.\nIn addition, early on we noticed that the model can achieve better performance and generation quality when we embed the quantized input values before passing them through the sample-level MLP (see. Table4). The embedding steps maps each of the q discrete values to a real-valued vector embedding. However, real-valued raw samples are still used as input to the higher modules.."}, {"section_index": "3", "section_name": "2.2.2 CONDITIONALLY INDEPENDENT SAMPLE OUTPUTS", "section_text": "To demonstrate the importance of a sample-level autoregressive module, we try replacing it witl \"Multi-Softmax\" (see Table 4), where the prediction of each sample x; depends only on the con ditioning vector c from Eq. 
In this configuration, the model outputs an entire frame of FS(1) samples at a time, modeling all samples in a frame as conditionally independent of each other. We find that this Multi-Softmax model (which lacks a sample-level autoregressive module) scores sig nificantly worse in terms of log-likelihood and fails to generate convincing samples. This suggest. that modeling the joint distribution of the acoustic samples inside each frame is very important ir order to obtain good acoustic generation. We found this to be true even when the frame size is re duced, with best results always with a frame size of 1, i.e., generating only one acoustic sample at a time.\nTraining recurrent neural networks on long sequences can be very computationally expensive.Oord et al. (2016) avoid this problem by using a stack of dilated convolutions instead of any recurrent con- nections. However, when they can be trained efficiently, recurrent networks have been shown to be very powerful and expressive sequence models. We enable efficient training of our recurrent model using truncated backpropagation through time, splitting each sequence into short subsequences and propagating gradients only to the beginning of each subsequence. We experiment with different subsequence lengths and demonstrate that we are able to train our networks, which model very long-term dependencies, despite backpropagating through relatively short subsequences.\nTable[3|shows that by increasing the subsequence length, performance substantially increases along side with train-time memory usage and convergence time. Yet it is noteworthy that our best models. have been trained on subsequences of length 512, which corresponds to 32 milliseconds, a small fraction of the length of a single a phoneme of human speech while generated samples exhibit. longer word-like structures.\neach window of FS(1) samples and predicting the next sample. At generation time, the MLP is run. repeatedly to generate one sample at a time. Table|1|shows a considerable gap between the baseline model RNN and this model, suggesting that the proposed hierarchically structured architecture of. SampleRNN makes a big difference.\nThe sample-level module models its output as a q-way discrete distribution over possible quantized values of x; (that is, the output layer of the MLP is a q-way Softmax).\nDespite the aforementioned fact, this generative model can mimic the existing long-term structure of the data which results in more natural and coherent samples that is preferred by human listeners (More on this in Sections 3.23.3]) This is due to the fast updates from TBPTT and specialized. frame-level modules (Section 2.1) with top tiers designed to model a lower resolution of signal while leaving the process of filling the details to lower tiers..\nIn this section we are introducing three datasets which have been chosen to evaluate the proposec architecture for modeling raw acoustic sequences. The description of each dataset and their prepro cessing is as follows:\nSee Fig.2|for a visual demonstration of examples from datasets and generated samples. For al the datasets we are using a 16 kHz sample rate and 16 bit depth. For the Blizzard and Musi datasets, preprocessing simply amounts to chunking the long audio files into 8 seconds long se quences on which we will perform truncated backpropagation through time. Each sequence in the Onomatopoeia dataset is few seconds long, ranging from 1 to 11 seconds. 
To train the models o1 this dataset, zero-padding has been applied to make all the sequences in a mini-batch have the sam length and corresponding cost values (for the predictions over the added Os) would be ignored whe. computing the gradients.\nWe particularly explored two gated variants of RNNs-GRUs and LSTMs. For the case of LSTMs the forget gate bias is initialized with a large positive value of 3, as recommended by|Zaremba|(2015 and|Gers(2001), which has been shown to be beneficial for learning long-term dependencies..\nAs for models that take real-valued input, e.g. the RNN-GMM and SampleRNN-GMM (with 4 components), normalization is applied per audio sample with the global mean and standard deviation obtained from the train split. For most of our experiments where the model demands discrete input. binning was applied per audio sample."}, {"section_index": "4", "section_name": "3.1 WAVENET RE-IMPLEMENTATION", "section_text": "We implemented the WaveNet architecture as described in Oord et al.(2016). Ideally, we would have liked to replicate their model exactly but owing to missing details of architecture and hyper parameters, as well as limited compute power at our disposal, we made our own design choices so that the model would fit on a single GPU while having a receptive field of around 250 milliseconds,\nBlizzard which is a dataset presented by Prahallad et al.(2013) for speech synthesis task. contains 315 hours of a single female voice actor in English; however, for our experiments we are using only 20.5 hours. The training/validation/test split is 86%-7%-7%.\nOnomatopoeia3| a relatively small dataset with 6,738 sequences adding up to 3.5 hours, is human vocal sounds like grunting, screaming, panting, heavy breathing, and coughing. Di- versity of sound type and the fact that these sounds were recorded from 51 actors and many categories makes it a challenging task. To add to that, this data is extremely unbalanced. The training/validation/test split is 92%-4%-4%.\nMusic dataset is the collection of all 32 Beethoven's piano sonatas publicly available on https : / /archive. org/ amounting to 10 hours of non-vocal audio. The training/val idation/test split is 88%-6%-6%.\nAll the models have been trained with teacher forcing and stochastic gradient decent (mini-batch size 128) to minimize the Negative Log-Likelihood (NLL) in bits per dimension (per audio sample). Gra- dients were hard-clipped to remain in [-1, 1] range. Update rules from the Adam optimizer (Kingma & Ba]2014) (1 = 0.9, , = 0.999, and e = 1e-8) with an initial learning rate of 0.001 was used to adjust the parameters. For training each model, random search over hyper-parameter val- ues (Bergstra & Bengiol2012) was conducted. The initial RNN state of all the RNN-based models was always learnable. Weight Normalization (Salimans & Kingma 2016) has been used for all the linear layers in the model (except for the embedding layer) to accelerate the training procedure. Size of the embedding layer was 256 and initialized by standard normal distribution. Orthogonal weight matrices used for hidden-to-hidden connections and other weight matrices initialized similar to He et al.(2015). In final model, we found GRU to work best (slightly better than LSTM). 1024 was the the number of hidden units for all GRUs (1 layer per tier for 3-tier and 3 layer for 2-tier model) and MLPs (3 fully connected layers with ReLU activation with output dimension being 1024 for first two layers and 256 for the final layer before softmax). 
Also, FS(1) = FS(2) = 2 and FS(3) = 8 were found to result in the lowest NLL.

[Figure 2 shows waveform plots for the Blizzard, Onomatopoeia and Music datasets: ground truth and generated samples at a 2-second scale (top three rows) and at a 100-millisecond scale (bottom three rows).]

Figure 2: Examples from the datasets compared to samples from our models. In the first 3 rows, 2 seconds of audio are shown. In the bottom 3 rows, 100 milliseconds of audio are shown. Rows 1 and 4 are ground truth, from which one can see how the datasets look different and have complex structure in low resolution, which the frame-level component of SampleRNN is designed to capture. Samples also to some extent mimic the same global structure. At the same time, zoomed-in samples of our model show that it can perfectly resemble the high-resolution structure present in the data as well.

Table 1: Test NLL in bits for the three presented datasets.

Model                Blizzard   Onomatopoeia   Music
RNN (Eq. 2)          1.434      2.034          1.410
WaveNet (re-impl.)   1.480      2.285          1.464
SampleRNN (2-tier)   1.392      2.026          1.076
SampleRNN (3-tier)   1.387      1.990          1.159

Table 2: Average NLL on Blizzard test set for real-valued models.

Table 3: Effect of subsequence length on NLL (bits per audio sample) computed on the Blizzard validation set.

Subsequence Length   32      64      128     256     512
NLL Validation       1.575   1.468   1.412   1.391   1.364

Table 4: Test (validation) set NLL (bits per audio sample) for Blizzard. Variants of SampleRNN are provided to compare the contribution of each component in performance.

SampleRNN (2-tier)   1.392 (1.369)
Without Embedding    1.566 (1.539)
Multi-Softmax        1.685 (1.656)

"}, {"section_index": "4", "section_name": "3.1 WAVENET RE-IMPLEMENTATION", "section_text": "We implemented the WaveNet architecture as described in Oord et al. (2016). Ideally, we would have liked to replicate their model exactly, but owing to missing details of architecture and hyper-parameters, as well as limited compute power at our disposal, we made our own design choices so that the model would fit on a single GPU while having a receptive field of around 250 milliseconds, while having a reasonable number of updates per unit time. Although our model is very similar to WaveNet, the design choices, e.g. number of convolution filters in each dilated convolution layer, length of target sequence to train on simultaneously (one can train with a single target with all samples in the receptive field as input, or with a target sequence of length T with input of size receptive field + T - 1), batch size, etc. might make our implementation different from what the authors have done in the original WaveNet model. Hence, we note here that although we did our best at exactly reproducing their results, there would very likely be a different choice of hyper-parameters between our implementation and the one of the authors.

For our WaveNet implementation, we have used 4 dilated convolution blocks, each having 10 dilated convolution layers with dilation 1, 2, 4, 8 up to 512. Hence, our network has a receptive field of 4092 acoustic samples, i.e. the parameters of the multinomial distribution of the sample at time step t are p(x_t) = f_theta(x_{t-1}, x_{t-2}, ..., x_{t-4092}) where theta is the set of model parameters. We train on a target sequence length of 1600 and use a batch size of 8. Each dilated convolution filter has size 2 and the number of output channels is 64 for each dilated convolutional layer (128 filters in total due to the gated non-linearity). We trained this model using the Adam optimizer with a fixed global learning rate of 0.001 for the Blizzard dataset and 0.0001 for the Onomatopoeia and Music datasets. We trained these models for about one week on a GeForce GTX TITAN X (a small sanity check of the receptive-field arithmetic is given below).
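The quoted receptive field can be verified with a few lines of arithmetic: with kernel size 2, each dilated layer extends the receptive field by its dilation, so 4 blocks of dilations 1 through 512 cover 4 x 1023 = 4092 past samples (4093 samples counting the current one, which is why the text says "around" this value).

blocks = 4
dilations = [2 ** i for i in range(10)]   # 1, 2, 4, ..., 512 within one block
past_samples = blocks * sum(dilations)    # 4 * 1023 = 4092
print(past_samples, past_samples + 1)     # 4092 past samples, 4093 including x_t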
We dropped the learning rate in the Blizzard experiment to 0.0001 after around 3 days of training.

Apart from reporting NLL, we conducted AB preference tests for random samples from four models trained on the Blizzard dataset. For unconditional generation of speech, which at best sounds like mumbling, this type of test is the one which is more suited. Competing models were the RNN, SampleRNN (2-tier), SampleRNN (3-tier), and our implementation of WaveNet. The rest of the models were excluded, as the quality of their samples was definitely lower and also to keep the number of pairwise comparison tests manageable. We will release the samples that have been used in this test too.

All the samples were set to have the same volume. Every user is then shown a set of twenty pairs of samples, with one random pair at a time. Each pair had samples from two different models. The human evaluator is asked to listen to the samples and has the option of choosing between the two models or choosing not to prefer either of them. Hence, we have a quantification of preference between every pair of models. We used the online tool made publicly available by Jillings et al. (2015).

Results in Fig. 3 clearly point out that SampleRNN (3-tier) is the winner by a huge margin in terms of preference by human raters, followed by SampleRNN (2-tier) and afterward the two other models, which matches the performance comparison in Table 1.

The same evaluation was conducted for the Music dataset, except for an additional filtering process of samples. Specific to only this dataset, we observed that a batch of generated samples from competing models (this time restricted to RNN, SampleRNN (2-tier), and SampleRNN (3-tier)) were either music-like or random noise. For all these models we only considered random samples that were not random noise. Fig. 4 is dedicated to the result of human evaluation on the Music dataset.

[Figure 3 shows six bar plots of listener preference percentages on Blizzard: 3-tier vs. 2-tier: 84.8 / 10.1 / 5.1 (no pref.); 3-tier vs. RNN: 84.2 / 8.9 / 6.9; 3-tier vs. WaveNet: 89.0 / 7.0 / 4.0; 2-tier vs. RNN: 79.0 / 18.0 / 3.0; 2-tier vs. WaveNet: 60.2 / 32.0 / 7.8; and for the RNN / WaveNet pair: 22.4 / 63.3 / 14.3 (no pref.).]

Figure 3: Pairwise comparison of the 4 best models based on the votes from listeners, conducted on samples generated from models trained on the Blizzard dataset.

[Figure 4 shows three bar plots of listener preference percentages on Music: 3-tier vs. 2-tier: 32.6 / 57.0 / 10.5 (no pref.); 3-tier vs. RNN: 83.5 / 4.7 / 11.8; 2-tier vs. RNN: 85.1 / 2.3 / 12.6.]

Figure 4: Pairwise comparison of the 3 best models based on the votes from listeners, conducted on samples generated from models trained on the Music dataset.

For the last experiment we are interested in measuring the memory span of the model. We trained our model, SampleRNN (3-tier), with the best hyper-parameters on a dataset of 2 speakers reading audio books, one male and one female, with mean fundamental frequencies of 125.3 and 201.8 Hz respectively. Each speaker has roughly 10 hours of audio in the dataset, which has been preprocessed similarly to Blizzard. We observed that it learned to stay consistent, generating samples from the same speaker without having any knowledge about the speaker ID or any other conditioning information. This effect is more apparent here in comparison to the unbalanced Onomatopoeia dataset, which sometimes mixes two different categories of sounds.

Another experiment was conducted to test the effect of memory and study the effective memory horizon.
We inject 1 second of silence in the middle of the sampling procedure in order to see if the model will remember to generate from the same speaker or not. Initially when sampling, we let the model generate 2 seconds of audio as it normally does. From 2 to 3 seconds, instead of feeding back the generated sample at each timestep, a silent token (zero amplitude) is fed. From 3 to 5 seconds we again sample normally, feeding back the generated token.

We did classification based on the mean fundamental frequency of the speakers for the first and last 2 seconds. In 83% of samples, SampleRNN generated from the same person in the two separate segments.

This is in contrast to a model with a fixed past window like WaveNet, where injecting 16000 silent tokens (3.3 times the receptive field size) is equivalent to generating from scratch, which has a 50% chance (assuming each 2-second segment is coherent and not a mixed sound of two speakers)."}, {"section_index": "5", "section_name": "4 RELATED WORK", "section_text": "Our work is related to earlier work on auto-regressive multi-layer neural networks, starting with Bengio & Bengio (1999), then NADE (Larochelle & Murray, 2011) and more recently PixelRNN (van den Oord et al., 2016). Similar to how they tractably model the joint distribution over units of the data (e.g. words in sentences, pixels in images, etc.) through an auto-regressive decomposition, we transform the joint distribution of acoustic samples using Eq. 1.

The idea of having parts of the model running at different clock rates is related to multi-scale RNNs (Schmidhuber, 1992; El Hihi & Bengio, 1995; Koutnik et al., 2014; Sordoni et al., 2015; Serban et al., 2016).

Our work is closely related to WaveNet (Oord et al., 2016), which is why we have made the above comparisons, and makes it interesting to compare the effect of adding higher-level RNN stages working at a low resolution. Similar to this work, our models generate one acoustic sample at a time, conditioned on all previously generated samples. We also share the preprocessing step of quantizing the acoustics into bins. Unlike this model, we have different modules in our models running at different clock-rates. In contrast to WaveNets, we mitigate the problem of long-term dependency with a hierarchical structure and by using stateful RNNs, i.e. we will always propagate hidden states to the next training sequence, although the gradient of the loss will not take into account the samples in the previous training sequence.

We propose a novel model that can address unconditional audio generation in the raw acoustic domain, which typically has been done until recently with hand-crafted features. We are able to show that a hierarchy of time scales and frequent updates will help to overcome the problem of modeling extremely high-resolution temporal data. That allows us, for this particular application, to learn the data manifold directly from audio samples. We show that this model can generalize well and generate samples on three datasets that are different in nature. We also show that the samples generated by this model are preferred by human raters.

Success in this application, with a general-purpose solution as proposed here, opens up room for more improvement when specific domain knowledge is applied. This method, however, proposed with the audio generation application in mind, can easily be adapted to other tasks that require learning the representation of sequential data with high temporal resolution and long-range complex structure."}, {"section_index": "6", "section_name": "ACKNOWLEDGMENTS", "section_text": "The authors would like to thank Joao Felipe Santos and Kyle Kastner for insightful comments and discussion. We would like to thank the Theano Development Team (2016) and MILA staff. We acknowledge the support of the following agencies for research funding and computing support: NSERC, Calcul Quebec, Compute Canada, the Canada Research Chairs and CIFAR.
Jose Sotelo also thanks the Consejo Nacional de Ciencia y Tecnologia (CONACyT) as well as the Secretaria de Educacion Publica (SEP) for their support. This work was a collaboration with Ubisoft.

1 http://deeplearning.net/software/theano"}, {"section_index": "7", "section_name": "REFERENCES", "section_text": "Yoshua Bengio and Samy Bengio. Modeling high-dimensional discrete data with multi-layer neural networks. In NIPS, volume 99, pp. 400-406, 1999.

James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281-305, 2012.

Alexander Bertrand, Kris Demuynck, Veronique Stouten, et al. Unsupervised learning of auditory filter banks using non-negative matrix factorisation. In 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4713-4716. IEEE, 2008.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pp. 2980-2988, 2015.

Alexey Dosovitskiy, Jost Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning to generate chairs, tables and cars with convolutional networks. 2016.

Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In NIPS, volume 400, pp. 409. Citeseer, 1995.

Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Jan Koutnik, Klaus Greff, Faustino Gomez, and Juergen Schmidhuber. A clockwork rnn. arXiv preprint arXiv:1402.3511, 2014.

Hugo Larochelle and Iain Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, pp. 2, 2011.

Honglak Lee, Peter Pham, Yan Largman, and Andrew Y Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in neural information processing systems, pp. 1096-1104, 2009.

Kishore Prahallad, Anandaswarup Vadapalli, Naresh Elluru, G Mantena, B Pulugundla, P Bhaskararao, HA Murthy, S King, V Karaiskos, and AW Black. The blizzard challenge 2013 - indian language task. In Blizzard Challenge Workshop 2013, 2013.

Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. arXiv preprint arXiv:1602.07868, 2016.

Jurgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992.

Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion.
In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 553-562. ACM, 2015.

Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In ICML, 2016.

Wojciech Zaremba. An empirical exploration of recurrent network architectures. 2015."}, {"section_index": "8", "section_name": "APPENDIX A", "section_text": "The SampleRNN-WaveNet model has two modules operating at two different clock-rates. The slower clock-rate module (the frame-level module) sees one frame (each of which has size FS) at a time, while the faster clock-rate component (the sample-level component) sees one acoustic sample at a time, i.e. the ratio of clock-rates for these two modules is the size of a single frame. The number of sequential steps for the frame-level component is therefore FS times lower. We repeat the output of each step of the frame-level component FS times so that the numbers of time-steps for the outputs of both components match. The outputs of both modules are concatenated for every time-step, and the result is further operated on by non-linearities for every time-step independently before generating the final output; a minimal sketch of this combination appears at the end of this appendix.

In our experiments, we kept the size of a single frame (FS) at 128. We tried two variants of this model: 1. a fully convolutional WaveNet and 2. an RNN-WaveNet. In the fully convolutional WaveNet, both modules described above are implemented using dilated convolutions as described in the original WaveNet model. In the RNN-WaveNet, we use a high-capacity RNN in the frame-level module to model the dependency between frames. The sample-level WaveNet in the RNN-WaveNet has a receptive field of size 509 samples from the past.

Although these models are designed with the intention of combining the two models to harness their best features, preliminary experiments show that this variant is not meeting our expectations at the moment, which directs us to possible future work."}]
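A minimal numpy sketch of the tier combination described in this appendix follows: each frame-level output is repeated FS times and then concatenated with the per-sample features at every time-step. The array names and dimensions are illustrative assumptions, not taken from any released code.

import numpy as np

def combine_tiers(frame_feats, sample_feats, FS=128):
    # frame_feats:  (T // FS, d_frame) -- one vector per frame
    # sample_feats: (T, d_sample)      -- one vector per acoustic sample
    upsampled = np.repeat(frame_feats, FS, axis=0)       # (T, d_frame)
    return np.concatenate([upsampled, sample_feats], 1)  # per-time-step concat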
r1rz6U5lg | [{"section_index": "0", "section_name": "ABSTRACT", "section_text": "Code super-optimization is the task of transforming any given program to a more efficient version while preserving its input-output behaviour. In some sense, it is similar to the paraphrase problem from natural language processing, where the intention is to change the syntax of an utterance without changing its semantics. Code optimization has been the subject of years of research that has resulted in the development of rule-based transformation strategies that are used by compilers. More recently, however, a class of stochastic search based methods has been shown to outperform these strategies. This approach involves repeated sampling of modifications to the program from a proposal distribution, which are accepted or rejected based on whether they preserve correctness and the improvement they achieve. These methods, however, neither learn from past behaviour nor do they try to leverage the semantics of the program under consideration. Motivated by this observation, we present a novel learning based approach for code super-optimization. Intuitively, our method works by learning the proposal distribution using unbiased estimators of the gradient of the expected improvement. Experiments on benchmarks comprising of automatically generated as well as existing ('Hacker's Delight') programs show that the proposed method is able to significantly outperform state of the art approaches for code super-optimization."}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "Considering the importance of computing to human society, it is not surprising that a very large body of research has gone into the study of the syntax and semantics of programs and programming languages. Code super-optimization is an extremely important problem in this context. Given a program or a snippet of source-code, super-optimization is the task of transforming it to a version that has the same input-output behaviour but can be executed on a target compute architecture more efficiently. Superoptimization provides a natural benchmark for evaluating representations of programs. As a task, it requires the decoupling of the semantics of the program from its superfluous properties, the exact implementation. In some sense, it is the natural analogue of the paraphrase problem in natural language processing, where we want to change syntax without changing semantics.

Decades of research have been done on the problem of code optimization, resulting in the development of sophisticated rule-based transformation strategies that are used in compilers to allow them to perform code optimization. While modern compilers implement a large set of rewrite rules and are able to achieve impressive speed-ups, they fail to offer any guarantee of optimality, thus leaving room for further improvement. An alternative approach is to search over the space of all possible programs that are equivalent to the compiler output, and select the one that is the most efficient. If the search is carried out in a brute-force manner, we are guaranteed to achieve super-optimization. However, this approach quickly becomes computationally infeasible as the number of instructions and the length of the program grow.

In order to efficiently perform super-optimization, recent approaches have started to use a stochastic search procedure, inspired by Markov Chain Monte Carlo (MCMC) sampling (Schkufza et al., 2013).
Briefly, the search starts at an initial program, such as the compiler output. It iteratively suggests modifications to the program, where the probability of a modification is encoded in a proposal distribution. The modification is either accepted or rejected with a probability that is dependent on the improvement achieved. Under certain conditions on the proposal distribution, the above procedure can be shown, in the limit, to sample from a distribution over programs, where the probability of a program is related to its quality. In other words, the more efficient a program, the more times it is encountered, thereby enabling super-optimization. Using this approach, high-quality implementations of real programs, such as the Montgomery multiplication kernel from the OpenSSL library, were discovered. These implementations outperformed the output of the gcc compiler and even expert-handwritten assembly code.

One of the main factors that governs the efficiency of the above stochastic search is the choice of the proposal distribution. Surprisingly, the state of the art method, Stoke (Schkufza et al., 2013), employs a proposal distribution that is neither learnt from past behaviour nor depends on the syntax or semantics of the program under consideration. We argue that this choice fails to fully exploit the power of stochastic search. For example, consider the case where we are interested in performing bitwise operations, as indicated by the compiler output. In this case, it is more likely that the optimal program will contain bitshifts than floating point opcodes. Yet, Stoke will assign an equal probability of use to both types of opcodes.

In order to alleviate the aforementioned deficiency of Stoke, we build a reinforcement learning framework to estimate the proposal distribution for optimizing the source code under consideration. The score of the distribution is measured as the expected quality of the program obtained via stochastic search. Using training data, which consists of a set of input programs, the parameters are learnt via the REINFORCE algorithm (Williams, 1992). We demonstrate the efficacy of our approach on two datasets. The first is composed of programs from 'Hacker's Delight' (Warren, 2002). Due to the limited diversity of the training samples, we show that it is possible to learn a prior distribution (unconditioned on the input program) that outperforms the state of the art. The second dataset contains automatically generated programs that introduce diversity in the training samples. We show that, in this more challenging setting, we can learn a conditional distribution given the initial program that significantly outperforms Stoke.

Super-optimization The earliest approaches for super-optimization relied on brute-force search. By sequentially enumerating all programs in increasing length order (Granlund & Kenner, 1992; Massalin, 1987), the shortest program meeting the specification is guaranteed to be found. As expected, this approach scales poorly to longer programs or to large instruction sets. The longest reported synthesized program was 12 instructions long, on a restricted instruction set (Massalin, 1987).

Trading off completeness for efficiency, stochastic methods (Schkufza et al., 2013) reduced the number of programs to test by guiding the exploration of the space, using the observed quality of programs encountered as hints. In order to improve the size of solvable instances, Phothilimthana et al.
(2016) combined stochastic optimizers with smart enumerative solvers. However, the reliance of stochastic methods on a generic, unspecific exploratory policy made the optimization blind to the problem at hand. We propose to tackle this problem by learning the proposal distribution.

Neural Computing Similar work was done in the restricted case of finding efficient implementations of the computation of the value of degree k polynomials (Zaremba et al., 2014). Programs were generated from a grammar, using a learnt policy to prioritise exploration. This particular approach of guided search looks promising to us, and is in spirit similar to our proposal, although applied to a very restricted case.

Another approach to guide the exploration of the space of programs was to make use of the gradients of differentiable relaxations of programs. Bunel et al. (2016) attempted this by simulating program execution using Recurrent Neural Networks. However, this provided no guarantee that the network parameters were going to correspond to real programs. Additionally, this method only had the possibility of performing local, greedy moves, limiting the scope of possible transformations. On the contrary, our proposed approach operates directly on actual programs and is capable of accepting short-term detrimental moves.

Learning to optimize Outside of program optimization, applying learning algorithms to improve optimization procedures, either in terms of results achieved or runtime, is a well studied subject. Doppa et al. (2014) proposed imitation learning based methods to deal with structured output spaces in a 'Learning to search' framework. While this is similar in spirit to stochastic search, our setting differs in the crucial aspect of having a valid cost function instead of searching for one.

More relevant is the recent literature on learning to optimize. Li & Malik (2016) and Andrychowicz et al. (2016) learn how to improve on first-order gradient descent algorithms, making use of neural networks. Our work is similar, as we aim to improve the optimization process. However, as opposed to the gradient descent that they learn on a continuous unconstrained space, our initial algorithm is an MCMC sampler on a discrete domain.

Similarly, training a proposal distribution parameterized by a Neural Network was also proposed by Paige & Wood (2016) to accelerate inference in graphical models. Similar approaches were successfully employed in computer vision problems where data driven proposals allowed to make inference feasible (Jampani et al., 2015; Kulkarni et al., 2015; Zhu et al., 2000). Other approaches to speeding up MCMC inference include the work of Salimans et al. (2015), combining it with Variational Inference.

Stoke (Schkufza et al., 2013) performs black-box optimization of a cost function on the space of programs, represented as a series of instructions. Each instruction is composed of an opcode, specifying what to execute, and some operands, specifying the corresponding registers. Each given input program T defines a cost function. For a candidate program R, called a rewrite, the goal is to optimize the following cost function:

cost(R, T) = w_e x eq(R, T) + w_p x perf(R)    (1)

The term eq(R, T) measures how well the outputs of the rewrite match the outputs of the reference program. This can be obtained either exactly by running a symbolic validator or approximately by running test cases. The term perf(R) is a measure of the efficiency of the program.
In this paper we consider runtime to be the measure of this efficiency. It can be approximated by the sum of the latencies of all the instructions in the program. Alternatively, the runtime of the program on some test cases can be used.

To find the optimum of this cost function, Stoke runs an MCMC sampler using the Metropolis (Metropolis et al., 1953) algorithm. This allows us to sample from the probability distribution induced by the cost function:

p(R; T) = \frac{1}{Z} \exp(-cost(R, T))    (2)

At each step, a candidate rewrite R' is sampled from a proposal distribution q:

R' ~ q(. | R)    (3)

The acceptance criterion

\alpha(R \to R'; T) = \min\left(1, \frac{p(R'; T)}{p(R; T)}\right)    (4)

is then used as the parameter of a Bernoulli distribution from which an accept/reject decision is sampled. If the move is accepted, the state of the optimizer is updated to R'. Otherwise, it remains in R.

While the above procedure is only guaranteed to sample from the distribution p(.; T) in the limit if the proposal distribution q is symmetric (q(R'|R) = q(R|R') for all R, R'), it still allows us to perform efficient hill-climbing for non-symmetric proposal distributions. Moves leading to an improvement are always going to be accepted, while detrimental moves can still be accepted in order to avoid getting stuck in local minima."}, {"section_index": "2", "section_name": "3.2 LEARNING TO SEARCH", "section_text": "We now describe our approach to improve stochastic search by learning the proposal distribution. We begin our description by defining the learning objective (section 3.2.1), followed by a parameterization of the proposal distribution (section 3.2.2), and finally the reinforcement learning framework to estimate the parameters of the proposal distribution (section 3.2.3)."}, {"section_index": "3", "section_name": "3.2.1 OBJECTIVE FUNCTION", "section_text": "Our goal is to optimize the cost function defined in equation (1). Given a fixed computational budget of T iterations to perform program super-optimization, we want to make moves that lead us to the lowest possible cost. As different programs have different runtimes and therefore different associated costs, we need to perform normalization. As the normalized loss function, we use the ratio between the best rewrite found and the cost of the initial unoptimized program R_0. Formally, the loss for a set of rewrites {R_t}_{t=0..T} is defined as follows:

r(\{R_t\}_{t=0..T}) = \frac{\min_{t=0..T} cost(R_t, T)}{cost(R_0, T)}    (5)

The proposal distribution (3) originally used in Stoke (Schkufza et al., 2013) takes the form of a hierarchical model. The type of the move is initially sampled from a probability distribution. Additional samples are drawn to specify, for example, the affected location in the program, the new operands or the opcode to use. Which of these probability distributions get sampled depends on the type of move that was first sampled. The detailed structure of the proposal probability distribution can be found in Appendix B.

Stoke uses uniform distributions for each of the elementary probability distributions the model samples from. This corresponds to a specific instantiation of the general stochastic search paradigm. In this work, we propose to learn those probability distributions so as to maximize the probability of reaching the best programs. The rest of the optimization scheme remains similar to the one of Schkufza et al. (2013).
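As a minimal illustration of what learning an elementary distribution means here, the sketch below replaces one of Stoke's uniform choices (the distribution over the nine move types of Appendix B) by a softmax-parameterized categorical distribution. The logits are the quantities being learned; all names are our own, not taken from the released implementation.

import numpy as np

def sample_move_type(logits, rng=np.random):
    # Softmax over per-move-type logits, then a categorical draw.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

uniform_logits = np.zeros(9)   # all-zero logits recover Stoke's uniform behaviour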
Our chosen parameterization of q is to keep the hierarchical structure of the original work of Schkufza et al. (2013), as detailed in Appendix B, and to parameterize all the elementary probability distributions (over the positions in the programs, the instructions to propose or the arguments) independently. The set \theta of parameters for q_\theta will thus contain a set of parameters for each elementary probability distribution. A fixed proposal distribution is kept through the optimization of a given program, so the proposal distribution needs to be evaluated only once, at the beginning of the optimization, and not at every iteration of MCMC."}, {"section_index": "4", "section_name": "3.2.3 LEARNING THE PROPOSAL DISTRIBUTION", "section_text": "Recall that our goal is to learn a proposal distribution. Given that our optimization procedure is stochastic, we will need to consider the expected cost as our loss. This expected loss is a function of the parameters \theta of our parametric proposal distribution q_\theta:

L(\theta) = E_{\{R_t\} \sim q_\theta}\left[ r(\{R_t\}_{t=0..T}) \right]    (6)

The stochastic computation graph corresponding to a run of the Metropolis algorithm is given in Figure 1. We have assumed the operation of evaluating the cost of a program to be a deterministic function, as we will not model the randomness of measuring performance.

In order to learn the proposal distribution, we will use stochastic gradient descent on our loss function (6). We obtain the first order derivatives with regard to our proposal distribution parameters using the REINFORCE (Williams, 1992) estimator, also known as the likelihood ratio estimator (Glynn, 1990) or the score function estimator (Fu, 2006). This estimator relies on a rewriting of the gradient of the expectation:

\nabla_\theta \sum_x f(x; \theta) r(x) = \sum_x r(x) \nabla_\theta f(x; \theta) = \sum_x f(x; \theta) r(x) \nabla_\theta \log(f(x; \theta))

and provides an unbiased estimate of the gradient.

Figure 1: Stochastic computation graph of the Metropolis algorithm used for program super-optimization. Round nodes are stochastic nodes and square ones are deterministic. Red arrows correspond to computation done in the forward pass that needs to be learned, while green arrows correspond to the backward pass. Full arrows represent deterministic computation and dashed arrows represent stochastic ones. The different steps of the forward pass are: (a) Based on features of the reference program, the proposal distribution q is computed. (b) A random move is sampled from the proposal distribution. (c) The score of the proposed rewrite is experimentally measured. (d) The acceptance criterion (4) for the move is computed. (e) The move is accepted with a probability equal to the acceptance criterion. (f) The cost is observed, corresponding to the best program obtained during the search. (g) Moves b to f are repeated T times.

A helpful way to derive the gradients is to consider the execution traces of the search procedure under the formalism of stochastic computation graphs (Schulman et al., 2015). We introduce one "cost node" in the computation graph at the end of each iteration of the sampler. The associated cost corresponds to the normalized difference between the best rewrite so far and the current rewrite after this step:

c_t = \frac{\min\left( cost(R_t, T) - \min_{i=0..t-1} cost(R_i, T),\; 0 \right)}{cost(R_0, T)}    (7)

The sum of all the cost nodes corresponds to the sum of all the improvements made when a new lowest cost was achieved. It can be shown that, up to a constant term, this is equivalent to our objective function (5). As opposed to considering only a final cost node at the end of the T iterations, this has the advantage that moves which were not responsible for the improvements do not get assigned any credit.

For each round of MCMC, the gradient with regard to the proposal distribution is computed using the REINFORCE estimator, which is equal to

\nabla_{\theta,i} L(\theta) = \left( \nabla_\theta \log q_\theta(R_i | R_{i-1}) \right) \sum_{t \geq i} c_t    (8)

As our proposal distribution remains fixed for the duration of a program optimization, these gradients need to be summed over all the iterations to obtain the total contribution to the proposal distribution. Once this gradient is estimated, it becomes possible to run standard back-propagation with regard to the features on which the proposal distribution is based, so as to learn the appropriate feature representation.
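A minimal sketch of how equations (7) and (8) could be computed from one recorded trace is given below. costs[t] stands for cost(R_t, T) and grad_log_q[t] for the gradient of log q_theta(R_t | R_{t-1}); both are assumed to have been logged during the MCMC run, and all names are illustrative rather than taken from the released implementation.

import numpy as np

def reinforce_gradient(costs, grad_log_q):
    costs = np.asarray(costs, dtype=float)
    best = np.minimum.accumulate(costs)     # running best cost
    # c_t fires only when a new lowest cost is reached; best[t] - best[t-1]
    # equals min(cost(R_t) - best-so-far, 0), matching equation (7).
    c = np.zeros_like(costs)
    c[1:] = (best[1:] - best[:-1]) / costs[0]
    returns = np.cumsum(c[::-1])[::-1]      # sum_{t >= i} c_t for each i
    return sum(r * g for r, g in zip(returns, grad_log_q))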
"}, {"section_index": "5", "section_name": "4.1 SETUP", "section_text": "Implementation Our system is built on top of the Stoke super-optimizer from Schkufza et al. (2013). We instrumented the implementation of the Metropolis algorithm to allow sampling from parameterized proposal distributions instead of the uniform distributions previously used. Because the proposal distribution is only evaluated once per program optimisation, the impact on the optimization throughput is low, as indicated in Table 3.

Our implementation also keeps track of the traces through the stochastic graph. Using the traces generated during the optimization, we can compute the estimator of our gradients, implemented using the Torch framework (Collobert et al., 2011).

Datasets We validate the feasibility of our learning approach on two experiments. The first is based on the Hacker's Delight (Warren, 2002) corpus, a collection of twenty-five bit-manipulation programs, used as a benchmark in program synthesis (Gulwani et al., 2011; Jha et al., 2010; Schkufza et al., 2013). Those are short programs, all performing similar types of tasks. Some examples include identifying whether an integer is a power of two from its binary representation, counting the number of bits turned on in a register, or computing the maximum of two integers. An exhaustive description of the tasks is given in Appendix C. Our second corpus of programs is automatically generated and is more diverse.

Models The models we are learning are a set of simple elementary probabilities for the categorical distributions over the instructions and over the types of moves to perform. We learn the parameters of each separate distribution jointly, using a Softmax transformation to enforce that they are proper probability distributions. For the types of move where opcodes are chosen from a specific subset, the probabilities of each instruction are appropriately renormalized. We learn two different types of models and compare them with the baseline of uniform proposal distributions, equivalent to Stoke.

Our first model, henceforth denoted the bias, is not conditioned on any property of the programs to optimize. By learning this simple proposal distribution, it is only possible to capture a bias in the dataset. This can be understood as an optimal proposal distribution that Stoke should default to.

The second model is a Multi Layer Perceptron (MLP), conditioned on the input program to optimize. For each input program, we generate a Bag-of-Words representation based on the opcodes of the program. This is embedded through a three-hidden-layer MLP with ReLU activation units.
The proposal distributions over the instructions and over the types of moves are each the result of passing the outputs of this embedding through a linear transformation, followed by a SoftMax.

The optimization is performed by stochastic gradient descent, using the Adam (Kingma & Ba, 2015) optimizer. For each estimate of the gradient, we draw 100 samples for our estimator. The values of the hyperparameters used are given in Appendix A. The number of parameters of each model is given in Table 1.

Table 1: Size of the different models compared. Uniform corresponds to Stoke (Schkufza et al., 2013).

Model   Number of parameters
Bias    2912
MLP     1.4 x 10^6

In order to have a larger corpus than the twenty-five programs initially present in 'Hacker's Delight', we generate various starting points for each optimization. This is accomplished by running Stoke with a cost function where w_p = 0 in (1), and keeping only the correct programs. Duplicate programs are filtered out. This allows us to create a larger dataset from which to learn. Examples of these programs at different levels of optimization can be found in Appendix D.

We divide this augmented Hacker's Delight dataset into two sets. All the programs corresponding to even-numbered tasks are assigned to the first set, which we use for training. The programs corresponding to odd-numbered tasks are kept for separate evaluation, so as to evaluate the generalisation of our learnt proposal distribution.

The optimization process is visible in Figure 2, which shows a clear decrease of the training loss and testing loss for both models. While simply using stochastic super-optimization allows to discover programs 40% more efficient on average, using a tuned proposal distribution yields even larger improvements, bringing the improvements up to 60%, as can be seen in Table 2. Due to the similarity between the different tasks, conditioning on the program features does not bring any significant improvements.

[Figure 2 plots the loss against the number of epochs on the training and testing sets, with panels (a) Bias and (b) Multi-layer Perceptron.]

Figure 2: Proposal distribution training. All models learn to improve the performance of the stochastic optimization. Because the tasks are different between the training and testing datasets, the values between datasets can't directly be compared, as some tasks have more opportunity for optimization. It can however be noted that improvements on the training dataset generalise to the unseen tasks.

In addition, to clearly demonstrate the practical consequences of our learning, we present in Figure 3 a superposition of score traces, sampled from the optimization of a program of the test set. Figure 3a corresponds to our initialisation, a uniform distribution as was used in the work of Schkufza et al. (2013). Figure 3d corresponds to our optimized version.
It can be observed that, while the uniform proposal distribution was successfully decreasing the cost of the program, our learnt proposal distribution manages to achieve lower scores in a more robust manner and in fewer iterations. Even using only 100 iterations (Figure 3e), the learned model outperforms the uniform proposal distribution with 400 iterations (Figure 3c).

Table 2: Final average relative score on the Hacker's Delight benchmark. While all models improve with regards to the initial proposal distribution based on uniform sampling, the models conditioning on program features reach better performances.

Model     Training   Test
Uniform   57.01%     53.71%
Bias      36.45%     31.82%
MLP       35.96%     31.51%

[Figure 3 shows optimization traces and score histograms: (a) With Uniform proposal; (b) Scores after 200 iterations; (c) Scores after 400 iterations; (d) With Learned Bias; (e) Scores after 100 iterations; (f) Scores after 200 iterations.]

Figure 3: Distribution of the improvement achieved when optimising a training sample from the Hacker's Delight dataset. The first column represents the evolution of the score during the optimization. The other columns represent the distribution of scores after a given number of iterations. (a) to (c) correspond to the uniform proposal distribution, (d) to (f) correspond to the learned bias."}, {"section_index": "6", "section_name": "4.3 AUTOMATICALLY GENERATED PROGRAMS", "section_text": "While the previous experiments show promising results on a set of programs of interest, the limited diversity of programs might have made the task too simple, as evidenced by the good performance of a blind model. Indeed, despite the data augmentation, only 25 different tasks were present, all variations of the same program task having the same optimum.

To evaluate our performance on a more challenging problem, we automatically synthesize a larger dataset of programs. Our method for doing so consists in running Stoke repeatedly with a constant cost function, for a large number of iterations. This leads to a fully random walk, as every proposed program will have the same cost, leading to a 50% chance of acceptance. We generate 600 of these programs, 300 that we use as a training set for the optimizer to learn over and 300 that we keep as a test set.

The performance achieved on this more complex dataset is shown in Figure 4 and Table 4.

[Figure 4 plots the loss against the number of epochs on the training and testing sets, with panels (a) Bias and (b) Multi-layer Perceptron.]

Figure 4: Training of the proposal distribution on the automatically generated benchmark.

Table 3: Throughput of the proposal distribution, estimated by timing MCMC for 10000 iterations."}, {"section_index": "7", "section_name": "5 CONCLUSION", "section_text": "Within this paper, we have formulated the problem of optimizing the performance of a stochastic super-optimizer as a Machine Learning problem. We demonstrated that learning the proposal distribution of an MCMC sampler was feasible and leads to faster and higher quality improvements. Our approach is not limited to stochastic superoptimization and could be applied to other stochastic search problems.

Several improvements are possible to the presented methods.
In mature domains such as Computer Vision, the representations of objects of interest have been widely studied and as a result are successful at capturing the information of each sample. In the domain of programs, obtaining informative representations remains a challenge. Our proposed approach ignores part of the structure of the program, notably temporal, due to the limited amount of existing data. The synthetic data having no structure, it wouldn't be suitable to learn those representations from it. Gathering a larger dataset of frequently used programs so as to measure more accurately the practical performance of those methods seems the evident next step for the task of program synthesis.

Table 4: Final average relative score. The MLP conditioning on the features of the program performs better than the simple bias. Even the unconditioned bias performs significantly better than the uniform proposal distribution.

Model     Training   Test
Uniform   76.63%     78.15%
Bias      61.81%     63.56%
MLP       60.13%     62.27%

It is interesting to compare our method to the synthesis-style approaches that have been appearing recently in the Deep Learning community (Graves et al., 2014) that aim at learning algorithms directly using differentiable representations of programs. We find that the stochastic search-based approach yields a significant advantage compared to those types of approaches, as the resulting program can be run independently from the Neural Network that was used to discover it."}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. In NIPS, 2016.

Rudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip HS Torr, and M Pawan Kumar. Adaptive neural compilation. In NIPS, 2016.

Berkeley Churchill, Eric Schkufza, and Stefan Heule. Stoke. https://github.com/StanfordPL/stoke, 2016.

Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. Torch7: A matlab-like environment for machine learning. In NIPS, 2011.

Janardhan Rao Doppa, Alan Fern, and Prasad Tadepalli. Hc-search: A learning framework for search-based structured prediction. JAIR, 2014.

Michael C. Fu. Gradient estimation. Handbooks in Operations Research and Management Science, 2006.

Torbjorn Granlund and Richard Kenner. Eliminating branches using a superoptimizer and the GNU C compiler. ACM SIGPLAN Notices, 1992.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, 2014.

Sumit Gulwani, Susmit Jha, Ashish Tiwari, and Ramarathnam Venkatesan. Synthesis of loop-free programs. In PLDI, 2011.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Tejas D Kulkarni, Pushmeet Kohli, Joshua B Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In CVPR, 2015.

Ke Li and Jitendra Malik. Learning to optimize. CoRR, 2016.

Henry Massalin. Superoptimizer: A look at the smallest program. In ACM SIGPLAN Notices, 1987.

Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 1953.

Brookes Paige and Frank Wood. Inference networks for sequential Monte Carlo in graphical models. In ICML, 2016.

Phitchaya Mangpo Phothilimthana, Aditya Thakur, Rastislav Bodik, and Dinakar Dhurjati. Scaling up superoptimization. In ACM SIGPLAN Notices, 2016.

Tim Salimans, Diederik P Kingma, Max Welling, et al. Markov chain monte carlo and variational inference: Bridging the gap. In ICML, 2015.

Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic superoptimization. SIGPLAN, 2013.

John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation using stochastic computation graphs. In NIPS, 2015.

Henry S Warren. Hacker's delight. 2002.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.

Song-Chun Zhu, Rong Zhang, and Zhuowen Tu.
Integrating bottom-up/top-down for object recognition by data driven markov chain monte carlo. In CVPR, 2000."}, {"section_index": "9", "section_name": "A.1 ARCHITECTURES", "section_text": "The output size of 9 corresponds to the types of moves. The output size of 2903 corresponds to the number of possible instructions that Stoke can use during a rewrite. This is smaller than the 3874 that it is possible to find in an original program.

Table 5: Architecture of the Bias.

Table 6: Architecture of the Multi Layer Perceptron."}, {"section_index": "10", "section_name": "A.2 TRAINING PARAMETERS", "section_text": "All of our models are trained using the Adam (Kingma & Ba, 2015) optimizer, with its default hyper-parameters beta_1 = 0.9, beta_2 = 0.999, epsilon = 10^-8. We use minibatches of size 32.

The learning rates were tuned by observing the evolution of the loss on the training datasets for the first iterations. The picked values are given in Table 7. Those learning rates are divided by the size of the minibatches.

Table 7: Values of the learning rate used."}, {"section_index": "11", "section_name": "B STRUCTURE OF THE PROPOSAL DISTRIBUTION", "section_text": "The sampling process of a move is a hierarchy of sampling steps. The easiest way to represent it is as a generative model for the program transformations. Depending on what type of move is sampled, different series of sampling steps have to be performed. For a given move, all the probabilities are sampled independently, so the probability of proposing the move is the product of the probabilities of picking each of the sampling steps. The generative model is defined in Figure 5. It is going to be parameterized by the parameters of each specific probability distribution it samples from. The default Stoke version uses uniform probabilities over all of those elementary distributions.

def proposal(current_program):
    move_type = sample(categorical(all_move_type))
    if move_type == 1:
        % Add empty Instruction
        pos = sample(categorical(all_positions(current_program)))
        return (ADD_NOP, pos)
    if move_type == 2:
        % Delete an Instruction
        pos = sample(categorical(all_positions(current_program)))
        return (DELETE, pos)
    if move_type == 3:
        % Instruction Transform
        pos = sample(categorical(all_positions(current_program)))
        instr = sample(categorical(set_of_all_instructions))
        arity = nb_args(instr)
        for i = 1, arity:
            possible_args = possible_arguments(instr, i)
            % get one of the arguments that can be used as i-th
            % argument for the instruction 'instr'
            operands[i] = sample(categorical(possible_args))
        return (TRANSFORM, pos, instr, operands)
    if move_type == 4:
        % Opcode Transform
        pos = sample(categorical(all_positions(current_program)))
        args = arguments_at(current_program, pos)
        instr = sample(categorical(possible_instruction(args)))
        % get an instruction compatible with the arguments
        % that are in the program at line pos.
        return (OPCODE_TRANSFORM, pos, instr)
    if move_type == 5:
        % Opcode Width Transform
        pos = sample(categorical(all_positions(current_program)))
        curr_instr = instruction_at(current_program, pos)
        instr = sample(categorical(same_mnemonic_instr(curr_instr)))
        % get one instruction with the same mnemonic as the
        % instruction 'curr_instr'.
        return (OPCODE_TRANSFORM, pos, instr)
    if move_type == 6:
        % Operand transform
        pos = sample(categorical(all_positions(current_program)))
        curr_instr = instruction_at(current_program, pos)
        arg_to_mod = sample(categorical(args(curr_instr)))
        possible_args = possible_arguments(curr_instr, arg_to_mod)
        new_operand = sample(categorical(possible_args))
        return (OPERAND_TRANSFORM, pos, arg_to_mod, new_operand)
    if move_type == 7:
        % Local swap transform
        block_idx = sample(categorical(all_blocks(current_program)))
        possible_pos = pos_in_block(current_program, block_idx)
        pos_1 = sample(categorical(possible_pos))
        pos_2 = sample(categorical(possible_pos))
        return (SWAP, pos_1, pos_2)
    if move_type == 8:
        % Global swap transform
        pos_1 = sample(categorical(all_positions(current_program)))
        pos_2 = sample(categorical(all_positions(current_program)))
        return (SWAP, pos_1, pos_2)
    if move_type == 9:
        % Rotate transform
        pos_1 = sample(categorical(all_positions(current_program)))
        pos_2 = sample(categorical(all_positions(current_program)))
        return (ROTATE, pos_1, pos_2)

The 25 tasks of the Hacker's Delight (Warren, 2002) dataset are the following:

1. Turn off the right-most one bit
2. Test whether an unsigned integer is of the form 2^(n-1)
3. Isolate the right-most one bit
4. Form a mask that identifies the right-most one bit and trailing zeros
5. Right propagate the right-most one bit
6. Turn on the right-most zero bit in a word
7. Isolate the right-most zero bit
8. Form a mask that identifies trailing zeros
9. Absolute value function
10. Test if the numbers of leading zeros of two words are the same
11. Test if the number of leading zeros of a word is strictly less than that of another word
12. Test if the number of leading zeros of a word is less than that of another word
13. Sign function
14. Floor of average of two integers without overflowing
15. Ceil of average of two integers without overflowing
16. Compute max of two integers
17. Turn off the right-most contiguous string of one bits
18. Determine if an integer is a power of two
19. Exchanging two fields of the same integer according to some input
20. Next higher unsigned number with same number of one bits
21. Cycling through 3 values
22. Compute parity
23. Counting number of bits
24. Round up to next highest power of two
25. Compute higher order half of product of x and y

Reference implementations of those programs were obtained from the examples directory of the stoke repository (Churchill et al., 2016)."}, {"section_index": "12", "section_name": "EXAMPLES OF HACKER'S DELIGHT OPTIMISATION", "section_text": "The first task of the Hacker's Delight corpus consists in turning off the right-most one bit of a register. Note that such optimizations are already feasible using the stoke system of Schkufza et al. (2013).

(a) Source:

#include <stdint.h>

int32_t p01(int32_t x) {
    int32_t o1 = x - 1;
    return x & o1;
}

(b) Optimization starting point:

pushq %rbp
movq %rsp, %rbp
movl %edi, -0x4(%rbp)
movl -0x4(%rbp), %edi
subl $0x1, %edi
movl %edi, -0x8(%rbp)
movl -0x4(%rbp), %edi
andl -0x8(%rbp), %edi
movl %edi, %eax
popq %rbp
retq
nop
nop
nop
(c) Alternative equivalent program:

blsrl %edi, %esi
sets %ch
xorq %rax, %rax
sarb $0x2, %ch
rorw $0x1, %di
subb $0x3, %dil
mull %ebp
subb %ch, %dh
rcrb $0x1, %dil
cmovbel %esi, %eax
retq

(d) Optimal solution:

blsrl %edi, %eax
retq

When compiling the code in Listing 6a, llvm generates the code shown in Listing 6b. A typical example of an equivalent version of the same program obtained by the data-augmentation procedure is shown in Listing 6c. Listing 6d contains the optimal version of this program.

Figure 6: Program at different stages of the optimization."}]
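As a quick illustration (in Python, for readability) that the optimal rewrite is faithful to the source: x & (x - 1) clears the right-most set bit, which is exactly the semantics of the single BLSR instruction in Listing 6d.

def p01(x):
    # Same function as the C source in Listing 6a.
    return x & (x - 1)

assert p01(0b1011000) == 0b1010000   # right-most set bit cleared
assert p01(1) == 0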
HkljfjFee | [{"section_index": "0", "section_name": "SUPPORT REGULARIZED SPARSE CODING AND ITS FAST ENCODER", "section_text": "Yingzhen Yang1,2, Jiahui Yu2, Pushmeet Kohli3, Jianchao Yang1, Thomas S. Huang2
1Snap Research"}, {"section_index": "1", "section_name": "ABSTRACT", "section_text": "Sparse coding represents a signal by a linear combination of only a few atoms of a learned over-complete dictionary. While sparse coding exhibits compelling performance for various machine learning tasks, the process of obtaining the sparse code with a fixed dictionary is independent for each data point, without considering the geometric information and manifold structure of the entire data. We propose Support Regularized Sparse Coding (SRSC), which produces sparse codes that account for the manifold structure of the data by encouraging nearby data in the manifold to choose similar dictionary atoms. In this way, the obtained support regularized sparse codes capture the locally linear structure of the data manifold and enjoy robustness to data noise. We present the optimization algorithm of SRSC with a theoretical guarantee for the optimization over the sparse codes. We also propose a feed-forward neural network termed Deep Support Regularized Sparse Coding (Deep-SRSC) as a fast encoder to approximate the sparse codes generated by SRSC. Extensive experimental results demonstrate the effectiveness of SRSC and Deep-SRSC."}, {"section_index": "2", "section_name": "1 INTRODUCTION", "section_text": "The aim of sparse coding is to represent an input vector by a linear combination of a few atoms of a learned dictionary which is usually over-complete, and the coefficients for the atoms are called the sparse code. Sparse coding is widely applied in machine learning and signal processing, and the sparse code is extensively used as a discriminative and robust feature representation with convincing performance for classification and clustering (Yang et al., 2009; Cheng et al., 2013; Zhang et al., 2013). Suppose the data X = [x_1, x_2, ..., x_n] \in IR^{d x n} lie in the d-dimensional Euclidean space IR^d, and the dictionary matrix is D = [D^1, D^2, ..., D^p] \in IR^{d x p} with each D^k \in IR^d (k = 1, ..., p) being an atom of the dictionary. The sparse coding method seeks the linear sparse representation with respect to the dictionary D for each vector x_i \in X by solving the following convex optimization problem:

\min_{D,Z} \sum_{i=1}^n \|x_i - DZ^i\|_2^2 + \lambda \|Z^i\|_1 \quad s.t. \; \|D^k\|_2 \leq c_0, \; k = 1, ..., p    (1)

where \lambda is a weighting parameter for the \ell^1-norm of Z^i, and c_0 is a positive constant that bounds the \ell^2-norm of each dictionary atom. In (Gregor & LeCun, 2010), a feed-forward neural network named Learned Iterative Shrinkage and Thresholding Algorithm (LISTA) is proposed to produce the approximation for sparse coding (1). The architecture of LISTA is illustrated in Figure 1. The LISTA network involves a finite number of stages, wherein each stage performs the following operation on the intermediate sparse code:

z^{(k+1)} = h_\theta(Wx + Sz^{(k)})    (2)

* This material is based upon work supported by the National Science Foundation under Grant No. 1318971. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Figure 1: Illustration of the LISTA network for approximate sparse coding.

where h_\theta is an element-wise shrinkage function defined as

[h_\theta(u)]_k = sign(u_k)(|u_k| - \theta)_+, \; k = 1, ..., p    (3)

The parameters of the LISTA network are learned by gradient descent and back-propagation.
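To make equations (2) and (3) concrete, the following is a minimal numpy sketch of the LISTA forward pass; W, S and theta would be learned by back-propagation, and here they are simply placeholders.

import numpy as np

def soft_threshold(u, theta):
    # Element-wise shrinkage of equation (3).
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def lista(x, W, S, theta, n_stages=3):
    # Unrolled iterations of equation (2).
    z = soft_threshold(W @ x, theta)
    for _ in range(n_stages - 1):
        z = soft_threshold(W @ x + S @ z, theta)
    return z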
Inspired by LISTA, a series of previous works have designed neural networks to simulate different forms of linear coding and achieve end-to-end training for different tasks such as image super-resolution (Liu et al., 2016) and hashing (Wang et al., 2016).

Sparse coding is widely used to model high-dimensional data. Based on the formulation of sparse coding (1), it can be observed that the sparse code of each data point is obtained independently when the dictionary is fixed, which ignores the geometric information and manifold structure of the high-dimensional data. In order to obtain sparse codes that account for the geometric information and manifold structure of the data, many regularized sparse coding methods, such as (Liu et al., 2010; He et al., 2011; Zheng et al., 2011; Gao et al., 2013), employ the manifold assumption (Belkin et al., 2006). The manifold assumption in these methods imposes local smoothness on the sparse codes of nearby data, namely nearby data are encouraged to have similar sparse codes in the sense of ℓ2-distance, and they are termed ℓ2-Regularized Sparse Coding (ℓ2-RSC). In this paper, we propose Support Regularized Sparse Coding (SRSC). Compared to ℓ2-RSC, SRSC captures the locally linear structure of the data manifold by encouraging nearby data to share dictionary atoms. In addition, SRSC enjoys robustness to data noise and preserves freedom in the sparse representation of the data, without constraints on the magnitude of the sparse codes.

The remaining parts of the paper are organized as follows. SRSC and its optimization algorithm, together with ℓ2-RSC, are introduced in the next section. The theoretical properties of the optimization of SRSC are shown in Section 3, with a theoretical guarantee on the sub-optimal solution obtained in each step of the coordinate descent for the support regularized sparse codes: convergence to a critical point of the objective function and closeness to the globally optimal solution. We then show the performance of SRSC on data clustering, and conclude the paper. We use bold letters for matrices and vectors, and regular lower-case letters for scalars throughout this paper. A bold letter with superscript indicates the corresponding column of a matrix, and a bold letter with subscript indicates the corresponding element of a matrix or vector. ‖·‖_F and ‖·‖_p denote the Frobenius norm and the ℓ^p-norm.

In this section, we introduce Support Regularized Sparse Coding (SRSC), which is designed to capture the locally linear structure of the data manifold for sparse coding. One of the most important properties of a manifold is that it is locally Euclidean, and each data point in the manifold has a neighbourhood that is homeomorphic to a Euclidean space. The success of several manifold learning methods, including LLE (Roweis & Saul, 2000), SMCE (Elhamifar & Vidal, 2011) and Locally Linear Hashing (Irie et al., 2014), is built on exploiting the locally linear structure of the manifold. In these methods, the locally linear structure associated with each data point is a linear representation of that point by a set of its nearest neighbors in a nonparametric manner, from which a low-dimensional embedding complying with the manifold structure of the original data is obtained and used for various learning tasks.

Figure 2: Illustration of capturing the locally linear structure of the data manifold by Support Regularized Sparse Coding. Nearby data are encouraged to share dictionary atoms. In this example, x_i and x_j choose three common dictionary atoms, so they lie on or close to the local subspace S_1 spanned by the common atoms, and it is the similar case for another pair of nearby points with local subspace S_2. Due to the smoothness of the support of the sparse codes, neighboring local subspaces, such as S_1 and S_2, can share dictionary atoms. In this example, the two local subspaces share two dictionary atoms, marked in red.
In the context of sparse coding, the data lie on or close to the subspaces spanned by the dictionary atoms specified by the nonzero elements of the corresponding sparse codes. Inspired by this observation, we propose to capture the locally linear structure of the data manifold for sparse coding by encouraging nearby data to share the atoms of the dictionary, so that nearby data are on or close to the local subspace spanned by the common dictionary atoms (see Figure 2).

In order to obtain sparse codes with locally similar support, so as to capture the locally linear structure of the data manifold, we propose Support Regularized Sparse Coding (SRSC), which uses the support distance to measure the distance between the sparse codes of nearby data. Given a proper symmetric similarity matrix A, the sparse codes Z that capture the locally linear structure of the manifold minimize the following support regularization term:

R_A(Z) = \frac{1}{2}\sum_{i=1}^n \sum_{j=1}^n A_{ij} \, d(Z^i, Z^j)

A is usually the adjacency matrix of a K-Nearest-Neighbor (KNN) graph, i.e. A_{ij} = 1 if and only if x_i is among the K nearest neighbors of x_j or x_j is among the K nearest neighbors of x_i. Note that KNN is extensively used in the manifold learning literature, such as Locally Linear Embedding (LLE) (Roweis & Saul, 2000), Laplacian Eigenmaps (Belkin & Niyogi, 2003) and Sparse Manifold Clustering and Embedding (SMCE) (Elhamifar & Vidal, 2011), to establish the local neighborhood in the manifold. d indicates the support distance. For two vectors u, v of the same size, their support distance is defined below:

d(u, v) = \sum_{t=1}^{|u|} \big(I_{u_t = 0, v_t \neq 0} + I_{u_t \neq 0, v_t = 0}\big) \qquad (4)

where I is the indicator function. When the support distance between Z^i and Z^j is small for nonzero A_{ij}, x_i and x_j choose similar atoms of the dictionary for sparse representation. Therefore, SRSC captures the locally linear structure of the data manifold by encouraging nearby data to share dictionary atoms, wherein the common atoms shared by nearby data serve as the basis of the local subspace.

The optimization problem of SRSC is presented below:

\min_{D,Z} L(D, Z) = \sum_{i=1}^n \|x_i - DZ^i\|_2^2 + \lambda\|Z^i\|_1 + \gamma R_A(Z) \quad \text{s.t.} \ \|D^k\|_2 \le 1, \ k = 1, \dots, p \qquad (5)

where γ > 0 is the weighting parameter for the support regularization term. Similar to (Lee et al., 2006), problem (5) is optimized alternatingly with respect to the dictionary D and the sparse codes Z, with the other variable fixed. With the dictionary D fixed, the optimization over the sparse codes is

\min_{Z} \sum_{i=1}^n \|x_i - DZ^i\|_2^2 + \lambda\|Z^i\|_1 + \gamma R_A(Z) \qquad (7)

which is solved by coordinate descent. In each step of coordinate descent, the optimization is performed over the i-th column of Z, while fixing all the other sparse codes {Z^j}_{j \neq i}. For each 1 ≤ i ≤ n, the optimization problem for Z^i is below:

\min_{Z^i} F(Z^i) = \|x_i - DZ^i\|_2^2 + \lambda\|Z^i\|_1 + \gamma R_A(Z^i) \qquad (8)

Inspired by recent advances in solving non-convex optimization problems by the proximal linearized method (Bolte et al., 2014), proximal gradient descent (PGD) is used to optimize the nonconvex problem (8). Although the proximal mapping is typically associated with a lower semicontinuous function (Bolte et al., 2014), and it can be verified that R_A is not always lower semicontinuous, we can still derive a PGD-style iterative method to optimize (8).
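As a quick illustration of the support distance (4) and the regularizer R_A, here is a small NumPy sketch; the toy codes and adjacency matrix are made-up inputs.

```python
import numpy as np

def support_distance(u, v):
    """d(u, v) in (4): count coordinates where exactly one of u, v is nonzero."""
    su, sv = (u != 0), (v != 0)
    return int(np.sum(su ^ sv))  # xor of the two supports

def support_regularizer(Z, A):
    """R_A(Z) = 1/2 * sum_ij A_ij * d(Z^i, Z^j), with Z^i the i-th column of Z."""
    n = Z.shape[1]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if A[i, j] != 0:
                total += A[i, j] * support_distance(Z[:, i], Z[:, j])
    return 0.5 * total

Z = np.array([[1.0, 0.9, 0.0], [0.0, 0.0, 0.5], [2.0, 1.5, 0.0]])  # p=3 atoms, n=3 points
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])                    # toy KNN adjacency
print(support_regularizer(Z, A))  # columns 0 and 1 share support, so only pair (1, 2) pays
```

Note that the first two columns differ in magnitude but not in support, so they contribute nothing: the penalty only reacts to which atoms are chosen.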
For k = 1, …, p and 1 ≤ i ≤ n, define G^A_{ki} = \sum_{j=1}^n A_{ij}(I_{Z_{kj} = 0} - I_{Z_{kj} \neq 0}); G^A_{ki} indicates the degree to which Z_{ki} is discouraged to be nonzero, and it can be verified that, with the other sparse codes fixed,

R_A(Z^i) = \sum_{k=1}^p G^A_{ki} I_{Z_{ki} \neq 0} \qquad (9)

that is, R_A(Z^i) is equal to the right hand side of (9) up to a constant. Since each indicator function I_{Z_{ki} \neq 0} is lower semicontinuous, R_A is lower semicontinuous if G^A_{ki} ≥ 0 for k = 1, …, p. In the following text, a superscript with bracket indicates the iteration number of PGD or the iteration number of the coordinate descent, without confusion. The PGD-style iterative method for optimizing (8) is as follows:

\tilde{Z}^i = Z^{i(t-1)} - \frac{1}{\tau s}\nabla Q(Z^{i(t-1)}), \quad Q(Z^i) := \|x_i - DZ^i\|_2^2 \qquad (10)

Z^{i(t)}_{ki} = \begin{cases} \mathrm{argmin}_{v \in \{u_k, 0\}} H_k(v), & u_k \neq 0, \ \text{or} \ u_k = 0 \ \text{and} \ G^A_{ki} \geq 0 \\ \varepsilon, & u_k = 0 \ \text{and} \ G^A_{ki} < 0 \end{cases} \qquad (11)

for k = 1, …, p, where ε is any real number such that ε ≠ 0 and H_k(ε) ≤ H_k(Z^{i(t-1)}_{ki}). H_k and u are defined below:

H_k(v) = \frac{\tau s}{2}(v - u_k)^2 + \lambda|v| + \gamma G^A_{ki} I_{v \neq 0} \qquad (12)

u = \mathrm{sign}(\tilde{Z}^i) \odot \max\Big\{|\tilde{Z}^i| - \frac{\lambda}{\tau s}, 0\Big\} \qquad (13)

where ⊙ means element-wise multiplication. In (10), τ > 1 is a constant and s is the Lipschitz constant for the gradient of the function Q(·), namely

\|\nabla Q(y) - \nabla Q(z)\|_2 \leq s\|y - z\|_2, \quad \forall y, z \in \mathbb{R}^p \qquad (14)

The PGD-style iterative method starts from t = 1 and continues until the sequence {F(Z^{i(t)})}_t converges or the maximum iteration number is achieved. When the iterative method converges or terminates for each Z^i, the step of coordinate descent for Z^i is finished and the optimization algorithm proceeds to optimize the other sparse codes. Proposition 1 shows that the PGD-style iterative method decreases the value of the objective function in each iteration.

With the sparse codes Z fixed, the optimization over the dictionary D is

\min_D \|X - DZ\|_F^2 \quad \text{s.t.} \ \|D^k\|_2 \leq 1, \ k = 1, \dots, p \qquad (6)

which is solved using the Lagrangian dual as in Algorithm 1.

Algorithm 1 Support Regularized Sparse Coding
Input: The data set X = {x_i}_{i=1}^n, the parameters λ, γ, maximum iteration number M for the alternating method over D and Z, maximum iteration number M_z for coordinate descent on Z, maximum iteration number M_p for the PGD-style iterative method on each Z^i (i = 1, …, n), and stopping threshold ε.
1: m = 1
2: while m < M do
3:    Perform coordinate descent to optimize (7) and obtain Z^(m) with fixed D^(m-1). In the i-th (1 ≤ i ≤ n) step of each iteration of coordinate descent, solve (8) using the PGD-style iterative method (10) and (11) to update Z^i in each iteration of the PGD-style iterative method.
4:    Optimize (6) using the Lagrangian dual and obtain D^(m) with fixed Z^(m).
5:    if |L(D^(m), Z^(m)) - L(D^(m-1), Z^(m-1))| < ε then
6:        break
7:    else
8:        m = m + 1
9:    end if
10: end while
Output: the support regularized sparse codes Z when the above iterations converge or the maximum iteration number is achieved."}, {"section_index": "3", "section_name": "TIME COMPLEXITY", "section_text": "Algorithm 1 describes the algorithm of SRSC. We solve the ordinary sparse coding problem (1) by the online dictionary learning method (Mairal et al., 2009) and use the resulting dictionary and sparse codes as the initialization D^(0) and Z^(0) for the alternating method in Algorithm 1. In Algorithm 1, the time complexity of the optimization over the sparse codes is O(M M_z M_p n d p^2), and the time complexity of the optimization over the dictionary, using Newton's method to solve the Lagrangian dual problem, is O(M(np^2 + T_N(3p^{2.807} + 2dp^2 + dnp))), where T_N is the maximum iteration number of Newton's method; the optimization over the sparse code of each data point by the PGD-style iterative method (10) and (11) is almost as efficient as the widely used Iterative Shrinkage and Thresholding Algorithm (ISTA) (Daubechies et al., 2004; Beck & Teboulle, 2009).
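To make one pass of the update rules (10)–(13) concrete, here is a hedged NumPy sketch for a single sparse code Z^i with the other codes fixed. The choice tau = 1.1 is an illustrative assumption, and the ε-branch of (11), which only fires when u_k = 0 and G^A_{ki} < 0, is omitted for brevity.

```python
import numpy as np

def pgd_step(z_prev, x, D, G_i, lam, gamma, tau=1.1):
    """One iteration of the PGD-style update (10)-(13) for a single code Z^i.

    z_prev: current code Z^{i(t-1)}, shape (p,);  G_i: the vector (G^A_{1i}, ..., G^A_{pi}).
    """
    s = 2.0 * np.linalg.norm(D.T @ D, 2)          # Lipschitz constant of grad Q, cf. (14)
    step = 1.0 / (tau * s)
    grad = 2.0 * D.T @ (D @ z_prev - x)           # gradient of Q(z) = ||x - D z||_2^2
    z_tilde = z_prev - step * grad                # gradient step, cf. (10)
    u = np.sign(z_tilde) * np.maximum(np.abs(z_tilde) - lam * step, 0.0)  # cf. (13)

    def H(v, k):                                  # cf. (12)
        return 0.5 * tau * s * (v - u[k]) ** 2 + lam * abs(v) + gamma * G_i[k] * (v != 0)

    return np.array([u[k] if H(u[k], k) <= H(0.0, k) else 0.0
                     for k in range(len(u))])     # argmin over {u_k, 0}, cf. (11)
```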
Note that steps (10) and (13) are required by both our method and ISTA; compared to ISTA, the extra operations incurred by our PGD-style iterative method (10) and (11) are only the arithmetic operations, with time complexity 20p, for evaluating the function H_k(·) defined in (12) for k = 1, …, p. More specifically, evaluating the value of the function H_k(v) takes 10 arithmetic operations, and two evaluations, at v = u_k and v = 0, are needed. Since a compact dictionary is preferred by the extensive study of the sparse coding and dictionary learning literature, and a dictionary size p ≤ 500 is adopted throughout our experiments, our PGD-style iterative method only incurs extra operations of constant time complexity compared to ISTA while learning support regularized sparse codes. In Section 4, we propose Deep-SRSC as a fast approximation of SRSC with considerable speedup for obtaining the sparse codes of new data or test data (see more details in Section 4.1). Furthermore, we conduct an empirical study and show that the parallel coordinate descent method, which updates the codes of a group of P data points in parallel and provides P times speedup over the coordinate descent method used in Section 2.1.2 and Algorithm 1, exhibits almost the same performance as the coordinate descent method for the clustering task on the test set of the CIFAR-10 data. Please refer to the details in the subsection "Deep-SRSC with the Second Test Setting (Referring to the Training Data)" in the Appendix."}, {"section_index": "4", "section_name": "2.2 RELATED WORK: l2 REGULARIZED SPARSE CODING (l2-RSC)", "section_text": "The manifold assumption (Belkin et al., 2006) is usually employed by existing regularized sparse coding methods (Liu et al., 2010; He et al., 2011; Zheng et al., 2011; Gao et al., 2013) to obtain the sparse code according to the manifold structure of the data. Interpreting the sparse code of a data point as its embedding, the manifold assumption in the case of sparse coding for most existing methods requires that if two points x_i and x_j are close in the intrinsic geometry of the submanifold, their corresponding sparse codes Z^i and Z^j are also expected to be similar to each other in the sense of ℓ2-distance (Zheng et al., 2011; Gao et al., 2013). In other words, the sparse code varies smoothly along the geodesics in the intrinsic geometry. Based on spectral graph theory (Chung, 1997), extensive literature uses the graph Laplacian to impose local smoothness of the embedding and preserve the local manifold structure (Belkin et al., 2006; Zheng et al., 2011; Gao et al., 2013).

The sparse codes Z that capture the local geometric structure of the data in accordance with the manifold assumption by the graph Laplacian minimize the following ℓ2 regularization term, or the Laplacian regularization term:

R^{(\ell_2)}_A(Z) = \frac{1}{2}\sum_{i=1}^n \sum_{j=1}^n A_{ij}\|Z^i - Z^j\|_2^2 \qquad (15)

= \mathrm{tr}(Z L_A Z^\top) \qquad (16)

where the ℓ2-norm is used to measure the distance between sparse codes, and A is the same as that in Section 2.1. L_A = D_A - A is the graph Laplacian associated with the similarity matrix A; the degree matrix D_A is a diagonal matrix with each diagonal element being the sum of the elements in the corresponding row of A, namely (D_A)_{ii} = \sum_j A_{ij}. To the best of our knowledge, such ℓ2 regularization is employed by most methods that use graph regularization for sparse coding. Incorporating the ℓ2 regularization term into the optimization problem of sparse coding (1), the formulation of ℓ2 Regularized Sparse Coding (ℓ2-RSC) is presented below:

\min_{D,Z} L^{(\ell_2)}(D, Z) = \sum_{i=1}^n \Big(\|x_i - DZ^i\|_2^2 + \lambda\|Z^i\|_1\Big) + \gamma^{(\ell_2)} R^{(\ell_2)}_A(Z) \quad \text{s.t.} \ \|D^k\|_2 \le 1, \ k = 1, \dots, p \qquad (17)
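To contrast the two regularizers, here is a small NumPy sketch of the Laplacian term (15)–(16); the toy codes and adjacency are made-up inputs, and the comment points at the qualitative difference from the support distance illustrated earlier.

```python
import numpy as np

def laplacian_regularizer(Z, A):
    """R^(l2)_A(Z) = 1/2 * sum_ij A_ij ||Z^i - Z^j||_2^2 = tr(Z L_A Z^T), cf. (15)-(16)."""
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L_A = D_A - A
    return float(np.trace(Z @ L @ Z.T))

Z = np.array([[1.0, 0.9, 0.0], [0.0, 0.0, 0.5], [2.0, 1.5, 0.0]])
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
# Unlike the support distance, this penalty also charges nearby codes whose
# supports agree but whose magnitudes differ (columns 0 and 1 here).
print(laplacian_regularizer(Z, A))
```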
"}, {"section_index": "5", "section_name": "ADVANTAGE OF SRSC OVER l2-RSC", "section_text": "Although ℓ2-RSC imposes local smoothness on the sparse codes, it does not capture the locally linear structure of the data manifold. By promoting smoothness on the support of the sparse codes rather than on their ℓ2-distance, SRSC encodes the locally linear structure of the manifold in the sparse codes while preserving freedom in the sparse representation of the data, with no constraints on the magnitude of the sparse codes. Moreover, as pointed out by (Wang et al., 2015), support regularization offers robustness to noise for sparse coding. In SRSC, all the data consult their neighbors for choosing the dictionary atoms rather than choosing the atoms on their own, and the sparse codes of the noisy data are suppressed since they are forced to choose similar or the same atoms as the nearby clean data instead of choosing the atoms in the interests of representing themselves.

It can be observed that the optimization by coordinate descent over the sparse codes in Section 2.1.2 is important for the overall optimization of SRSC, and each step of the coordinate descent (8) is a difficult nonconvex problem that is crucial for obtaining the support regularized sparse codes, where the nonconvexity comes from the support regularization term R_A(Z^i) in (9). Therefore, the optimization of (8) plays an important role in the overall optimization of SRSC. In the previous section, a PGD-style iterative method was proposed to decrease the value of the objective in each iteration. In this section, we provide further theoretical analysis of the optimization of problem (8) when G^A_{ki} ≥ 0 for k = 1, …, p. This condition is equivalent to the condition that the support regularization function

R_c(v) = \sum_{k=1}^p c_k I_{v_k \neq 0} \qquad (18)

is lower semicontinuous, where c ∈ R^p collects the coefficients, with c_k = γ G^A_{ki}. Under this condition, we show that the PGD-style iterative method converges to a sub-optimal solution which is a critical point of the objective (8). By connecting the support regularization function to the capped-ℓ1 norm and the nonconvexity analysis of the support regularization term, we present a bound on the ℓ2-distance between the sub-optimal solution and the globally optimal solution to (8) in Theorem 1. Note that our analysis is valid for all 1 ≤ i ≤ n.

We first have the following result, namely that the support regularization function (18) is lower semicontinuous if and only if all the coefficients c are nonnegative.

Proposition 2. The support regularization function (18) is lower semicontinuous if and only if all the coefficients c are nonnegative.

Therefore, if G^A_{ki} ≥ 0 for k = 1, …, p, the support regularization term R_A(Z^i) in (9) is lower semicontinuous with respect to Z^i. In this case, the PGD-style iterative method proposed in Section 2.1.2 for each iteration t ≥ 1 becomes

Z^{i(t)}_{ki} = \mathrm{argmin}_{v \in \{u_k, 0\}} H_k(v), \quad k = 1, \dots, p \qquad (19)

\tilde{Z}^i = Z^{i(t-1)} - \frac{1}{\tau s} \cdot 2D^\top(DZ^{i(t-1)} - x_i) \qquad (20)

which is equivalent to the update rules in the ordinary proximal gradient descent method. G^A_{ki} is positive if the number of the neighbors of x_i with zero k-th element of their sparse codes is larger than the number with nonzero k-th element, which indicates that the neighbors of x_i suggest that a zero k-th element is preferable; the support regularization then incurs the penalty γG^A_{ki} if the sparse code element Z_{ki} is nonzero while the neighbors of x_i suggest that Z_{ki} = 0 is preferable. Intuitively, this situation happens when there is a conflict between choosing the support of the code solely by the data point itself and the suggestion of its neighbors; if the point is an outlier or is suffering from noise, the optimization can help that point make a sensible choice by considering the suggestion of its neighbors. We observe that G^A_{ki} ≥ 0 for k = 1, …, p happens in all the data sets used in this paper.

In the following lemma, we show that the sequence {Z^{i(t)}}_t generated by (19) and (20) converges to a critical point of F(Z^i), denoted by \hat{Z}^i. Denote by Z^{i*} the globally optimal solution to the original optimization problem (8). The following lemma also shows that both \hat{Z}^i and Z^{i*} are local solutions to the capped-ℓ1 regularized problem (21).
Before stating the lemma, the following definitions are introduced, which are essential for our analysis.

Definition 1. (Critical points) Given a non-convex function f : R^n → R ∪ {+∞} which is a proper and lower semicontinuous function, for a given x ∈ dom f, the Fréchet subdifferential of f at x, denoted by \hat{\partial}f(x), is the set of all vectors u ∈ R^n which satisfy

\liminf_{y \neq x, \, y \to x} \frac{f(y) - f(x) - \langle u, y - x \rangle}{\|y - x\|} \geq 0

and the limiting subdifferential of f at x, denoted by ∂f(x), is defined by

\partial f(x) = \{u \in \mathbb{R}^n : \exists \, x^k \to x, \ f(x^k) \to f(x), \ u^k \in \hat{\partial}f(x^k) \to u\}

x is a critical point of f if 0 ∈ ∂f(x).

Also, we are considering the following capped-ℓ1 regularized problem, which replaces the indicator function in the support regularization term R_A(Z^i) with the continuous capped-ℓ1 regularization term T:

\min_{\beta \in \mathbb{R}^p} L_{\text{capped-}\ell_1}(\beta) = \|x_i - D\beta\|_2^2 + \lambda\|\beta\|_1 + T(\beta; b) \qquad (21)

T(\beta; b) = \sum_{k=1}^p \frac{\gamma G^A_{ki}}{b} \min\{|\beta_k|, b\} \qquad (22)

for some b > 0.

Definition 2. (Local solution) A vector \tilde{\beta} is a local solution to the problem (21) if

2D^\top(D\tilde{\beta} - x_i) + \dot{P}(\tilde{\beta}; b) = 0

where \dot{P}(\beta; b) = [\dot{P}_1(\beta_1; b), \dot{P}_2(\beta_2; b), \dots, \dot{P}_p(\beta_p; b)]^\top and P_k(t; b) = \lambda|t| + T_k(t; b).

Note that in the above definition and the following text, \dot{P}_k(t; b) can be chosen as any value between the right differential \frac{\partial}{\partial t}P_k(t+; b) (or \dot{P}_k(t+; b)) and the left differential \frac{\partial}{\partial t}P_k(t-; b) (or \dot{P}_k(t-; b)) for k = 1, …, p.

Definition 3. (Degree of Nonconvexity of a Regularizer) For κ ≥ 0 and t ∈ R, define

\theta(t, \kappa) := \sup_s \{-\mathrm{sgn}(s - t)(\dot{P}(s; b) - \dot{P}(t; b)) - \kappa|s - t|\}

and for a vector u, θ(u, κ) is defined element-wise.

Lemma 1. Suppose G^A_{ki} ≥ 0 for k = 1, …, p. Then the sequence {Z^{i(t)}}_t generated by (19) and (20) converges to a critical point of F(Z^i), denoted by \hat{Z}^i. Moreover, if b in (21)–(22) is chosen such that

\max\Big\{\max_{j \in \hat{S}_i}\big|[\nabla Q(\hat{Z}^i)]_j\big|, \ \max_{j \in S_i^*}\big|[\nabla Q(Z^{i*})]_j\big|\Big\} < b < \min\Big\{\min_{k \in \hat{S}_i, G^A_{ki} \neq 0}|\hat{Z}_{ki}|, \ \min_{k \in S_i^*, G^A_{ki} \neq 0}|Z^{i*}_{ki}|\Big\} \qquad (23)

where \hat{S}_i and S_i^* denote the supports of \hat{Z}^i and Z^{i*}, then both \hat{Z}^i and Z^{i*} are local solutions to the capped-ℓ1 regularized problem (21).

Figure 3: Illustration of Deep-SRSC for approximate Support Regularized Sparse Coding

Inspired by the PGD-style iterative method (10) and (11) for SRSC and the LISTA network, we propose Deep Support Regularized Sparse Coding (Deep-SRSC), illustrated in Figure 3, which is a neural network that produces approximate support regularized sparse codes for SRSC. The goal of Deep-SRSC is to approximate the sparse codes of the input data in a fast way by feeding the data through the Deep-SRSC network, instead of running the iterative optimization algorithm for SRSC in Section 2.1. To achieve this goal, the Deep-SRSC network is trained on the training data by minimizing the squared distance between the predicted codes of the training data by the network and their ground truth codes. The network design of Deep-SRSC is in accordance with the proposed PGD-style iterative method: when W = (1/L) D^\top and S = I - (1/L) D^\top D, where L = τs, each stage in the recurrent structure of Deep-SRSC implements one iteration of the PGD-style iterative method, i.e. (10) and (11). In Deep-SRSC, W, S and L are to be learned by the network rather than computed from a pre-computed dictionary D, and S is shared over different layers.
The min-pooling neuron in Deep-SRSC outputs the result of \mathrm{argmin}_{v \in \{u_k, 0\}} H_k(v) or ε, according to the update rule (11) (see Figure 3).

Denote the training data by x_1, …, x_m, and let Z_{sr} be the ground truth support regularized sparse codes of the training data, which are obtained by the optimization method introduced in Section 2.1. Let f_{sr} be the Deep-SRSC encoder which produces the approximate support regularized sparse code \hat{z} = f_{sr}(x, \Theta_{sr}), where \Theta_{sr} = (W, S, L) denotes the parameters of Deep-SRSC. The parameters of Deep-SRSC are then learned by minimizing the cost function which measures the distance between the predicted approximate support regularized sparse codes and the ground truth ones, using stochastic gradient descent and back-propagation. The batch size is set to 1 so as to simulate the coordinate descent method for optimization over the sparse codes in Section 2.1.2, and the adjacency matrix of the KNN graph over the training data is required as input for training the network.

Using the degree of nonconvexity of the regularizer P, we have the following theorem, showing that the sub-optimal solution \hat{Z}^i obtained by our PGD-style iterative method can be close to the globally optimal solution Z^{i*} of the original problem (8). In the following text, B_I indicates a submatrix of B whose columns correspond to the nonzero elements of I, and σ_min(·) indicates the smallest singular value of a matrix.

Theorem 1. (Sub-optimal solution is close to the globally optimal solution) For any 1 ≤ i ≤ n, let \hat{E}_i = \hat{S}_i \cup S_i^*. Suppose G^A_{ki} ≥ 0 for k = 1, …, p, D_{\hat{E}_i} is not singular with κ_0 = σ_min(D_{\hat{E}_i}) > 0, κ_0^2 > κ > 0, and b is chosen according to (23) as in Lemma 1. Let S_i^\Delta = (\hat{S}_i \setminus S_i^*) \cup (S_i^* \setminus \hat{S}_i) be the symmetric difference between \hat{S}_i and S_i^*. Then

\|\hat{Z}^i - Z^{i*}\|_2 \leq \frac{1}{\kappa_0^2 - \kappa}\Bigg(\Big(\sum_{k \in S_i^\Delta \cap \hat{S}_i}\big(\max\{0, \tfrac{\gamma G^A_{ki}}{b} - \kappa|\hat{Z}_{ki} - b|\}\big)^2 + \sum_{k \in S_i^\Delta \setminus \hat{S}_i}\big(\max\{0, \tfrac{\gamma G^A_{ki}}{b} - \kappa b\}\big)^2\Big)^{1/2} + \|t\|_2\Bigg) \qquad (24)

where t ∈ R^p, t_k = 2λ I_{\hat{Z}_{ki} Z^{i*}_{ki} < 0} for k ∈ \hat{S}_i ∩ S_i^*, and t_k = 0 for all other k.

Remark 2. Note that the bound on the distance between the sub-optimal solution and the globally optimal solution presented in Theorem 1 does not require the typical Restricted Isometry Property (RIP) (Candès, 2008). When the terms \frac{\gamma G^A_{ki}}{b} - \kappa|\hat{Z}_{ki} - b| and \frac{\gamma G^A_{ki}}{b} - \kappa b are small positive numbers and Z^{i*} and \hat{Z}^i have similar signs in the intersection of their supports, \hat{Z}^i is close to the globally optimal solution.

The approximate codes of the new data, or the test data, are obtained by feeding the new data through the Deep-SRSC network learned on the training data. We provide two test settings below, depending on whether the training data are referred to in the test process.

1) In the first setting, where the training data are not referred to, the test data are a group of data points. The test data and the KNN graph over them are fed into the Deep-SRSC network to obtain the approximate codes of the test data. The locally linear manifold structure of the test data is encoded in the KNN graph over the test data. This setting is potentially more suitable for the situation of limited storage, where the training data and their codes do not need to be stored in the test process. This setting may not be suitable for test data that do not reliably reflect the locally linear manifold structure (e.g. in the case of a very small amount of test data), and in this case the second setting below is a better choice.

2) In the second setting, where the training data are referred to, the approximate code of each data point is obtained by feeding that point and the KNN graph over that point and the training data into the Deep-SRSC network.
The code of each test point is reliably obtained by referring to its nearest neighbors in the training data, and this process does not depend on whether the test data reflect the locally linear manifold structure."}, {"section_index": "6", "section_name": "4.1 DEEP-SRSC AS FAST ENCODER", "section_text": "It should be emphasized that Deep-SRSC is a fast encoder for SRSC when obtaining the codes of new data (or test data). Each layer of Deep-SRSC resembles one iteration of the PGD-style iterative method (10) and (11), and the computational cost of feeding a data point forward through one layer is the same as that of executing one iteration of the PGD-style iterative method for that point. Therefore, the feed-forward process of obtaining the sparse codes of new data using an l-layer Deep-SRSC is around M_p/l times faster than the PGD-style iterative method used in Algorithm 1, where M_p is the maximum iteration number for the PGD-style iterative method. In the experimental results shown in the next section, Deep-SRSC networks with different numbers of layers are employed to produce the approximate support regularized sparse codes, and the 6-layer Deep-SRSC achieves the minimum prediction error. With M_p = 50 throughout our experiments, Deep-SRSC is around 50/6 ≈ 8.3 times faster than the PGD-style iterative method. Our analysis in this subsection holds for both test settings.

Table 1: Clustering results on USPS handwritten digits database. c in the left column is the cluster number, i.e. the first c clusters of the entire data are used for clustering.

# Clusters  Measure  KM      SC      Sparse Coding  l2-RSC  SRSC
c = 4       AC       0.9243  0.4514  0.9869         0.9869  0.9880
            NMI      0.7782  0.4160  0.9429         0.9429  0.9467
c = 6       AC       0.7130  0.4325  0.7781         0.7781  0.9723
            NMI      0.6845  0.4865  0.8507         0.8507  0.9135
c = 8       AC       0.7294  0.4227  0.8163         0.8163  0.9645
            NMI      0.6851  0.4811  0.8669         0.8669  0.9027
c = 10      AC       0.6878  0.4041  0.8178         0.8287  0.8293
            NMI      0.6312  0.4765  0.8321         0.8398  0.8471

Table 2: Clustering results on various data sets.

Data Set                        Measure  KM      SC      Sparse Coding  l2-RSC  SRSC
COIL-20                         AC       0.6274  0.3347  0.9903         0.9903  0.9944
                                NMI      0.7533  0.5667  0.9879         0.9879  0.9933
COIL-100                        AC       0.5221  0.2372  0.6979         0.6979  0.7267
                                NMI      0.7633  0.5410  0.8837         0.8837  0.8876
UCI Gesture Phase Segmentation  AC       0.3868  0.3375  0.4003         0.4023  0.4123
                                NMI      0.1191  0.1300  0.1164         0.1164  0.1187

Table 3: Prediction error (average squared error between the predicted codes and the ground truth codes) of Deep-SRSC with different depth and different dictionary size on the test set of USPS data, using the first test setting.

Dictionary Size  1-layer  2-layer  6-layer
p = 100          0.06     0.04     0.04
p = 300          0.14     0.09     0.07
p = 500          0.24     0.12     0.11

Figure 4: Training error of Deep-SRSC with dictionary size p = 100, p = 300, and p = 500 (training error versus epoch number for the 1-layer, 2-layer, and 6-layer networks).
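Since Table 3 and the later prediction-error tables report the average squared error between predicted and ground-truth codes, here is a minimal sketch of that metric; averaging over all code entries is an assumption, since the exact normalization (per entry versus per data point) is not spelled out in the text.

```python
import numpy as np

def prediction_error(Z_pred, Z_true):
    """Average squared error between predicted and ground-truth codes (p x n matrices)."""
    return float(np.mean((Z_pred - Z_true) ** 2))
```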
The test error of Deep-SRSC is shown in Figure 5 in the appendix.

Table 4: Clustering results on the test set of USPS data with different dictionary size p.

Dictionary Size  Measure  KM      SC      Sparse Coding  l2-RSC  SRSC    6-Layer Deep-SRSC
p = 100          AC       0.6020  0.3279  0.6363         0.6363  0.7105  0.7155
                 NMI      0.5522  0.4372  0.7011         0.7011  0.7068  0.6778
p = 300          AC       -       -       0.6408         0.6462  0.7225  0.7000
                 NMI      -       -       0.7011         0.7011  0.7045  0.6817
p = 500          AC       -       -       0.6263         0.6268  0.6248  0.6836
                 NMI      -       -       0.6872         0.6898  0.7221  0.6537

"}, {"section_index": "7", "section_name": "5.1 CLUSTERING PERFORMANCE", "section_text": "In this subsection, the superiority of SRSC is demonstrated by its performance in data clustering on various data sets, e.g. the USPS handwritten digits data set, COIL-20, COIL-100 and the UCI Gesture Phase Segmentation data set. Two measures are used to evaluate the performance of the clustering methods, i.e. the Accuracy (AC) and the Normalized Mutual Information (NMI) (Zheng et al., 2004). SRSC is compared to K-means (KM), Spectral Clustering (SC), Sparse Coding and the ℓ2-RSC of Section 2.2. Throughout all the experiments, we set K = 3 for building the adjacency matrix A of the KNN graph, dictionary size p = 300 and λ = 0.1 for both ℓ2-RSC and SRSC. We also set γ^(ℓ2) = 1, which is the suggested default value in (Zheng et al., 2011), and M = M_z = 5 and M_p = 50 in Algorithm 1. The default value of the weight for the support regularization term of SRSC is γ = 0.5. SRSC is implemented in both MATLAB and CUDA C++ with extreme efficiency, and the code is published on GitHub: https://github.com/yingzhenyang/SRsc.

The USPS handwritten digits data set is comprised of n = 9298 handwritten images of ten digits from 0 to 9, and each image is of size 16 x 16 and represented by a 256-dimensional vector. The whole data set is divided into a training set of 7291 images and a test set of 2007 images. We run Algorithm 1 to obtain the support regularized sparse codes Z, then build an n x n similarity matrix Y over all the data. Two similarity measures are employed: the first similarity is the positive part of the inner product of the corresponding sparse codes, namely Y_{ij} = \max\{0, Z^{i\top} Z^j\}; the second one is Y_{ij} = A_{ij} \, q_{Z^i}^\top q_{Z^j}, where q_v is a binary vector of the same size as v with element 1 at the indices of the nonzero elements of v. The second similarity measure is named the support similarity, and it considers the number of common dictionary atoms chosen by the sparse codes. Spectral clustering is performed on the similarity matrix Y to obtain the clustering result of SRSC, and the best performance among the two similarity measures is reported. The same procedure is performed for all the other sparse coding based methods to obtain their clustering results. The clustering results of the various methods are shown in Table 1.

COIL-20 Database has 1440 images of resolution 32 x 32 for 20 objects, and the background is removed in all images. The dimension of this data is 1024. Its enlarged version, COIL-100 Database, contains 100 objects with 72 images of resolution 32 x 32 for each object. The images of each object were taken 5 degrees apart as each object was rotated on a turntable. The UCI Gesture Phase Segmentation data set contains the gesture information of three users when they told stories of some comic strips in front of the Microsoft Kinect sensor. We use the processed file provided by the original data, consisting of 9873 frames; the gesture information in each frame is the vectorial velocity and acceleration of the left hand, right hand, left wrist, and right wrist, represented by a 32-dimensional vector.
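As a small sketch of the two similarity measures described above, the following NumPy snippet builds both matrices from a code matrix Z (one column per point) and the KNN adjacency A; spectral clustering can then be run on the result (for instance scikit-learn's SpectralClustering with affinity='precomputed'), roughly matching the protocol above.

```python
import numpy as np

def inner_product_similarity(Z):
    """First measure: positive part of the code inner products, Y_ij = max(0, Z^iT Z^j)."""
    return np.maximum(Z.T @ Z, 0.0)

def support_similarity(Z, A):
    """Support similarity Y_ij = A_ij * <q_{Z^i}, q_{Z^j}>: the number of shared
    dictionary atoms of neighboring codes (zero for non-neighbors)."""
    Q = (Z != 0).astype(float)      # q_v: binary support indicators, one column per point
    return A * (Q.T @ Q)            # mask the shared-atom counts by the KNN adjacency
```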
The clustering results on these three data sets are shown in Table 2. It can be observed from Table 1 and Table 2 that SRSC always produces better clustering accuracy than the other competing methods, due to its capability of capturing the locally linear manifold structure of the data and its robustness to noise. In the appendix, we further show the performance of different sparse coding based methods with different dictionary sizes on the COIL-100 data set in Table 5, and investigate the parameter sensitivity of SRSC by demonstrating its performance with varying γ and K in Table 6."}, {"section_index": "8", "section_name": "5.2 APPROXIMATION BY DEEP-SRSC", "section_text": "In this subsection, Deep-SRSC is employed as a fast encoder to approximate the support regularized sparse codes of SRSC on the USPS data set. Throughout this subsection, we show results using the first test setting introduced in Section 4, i.e. testing without referring to the training data. Additional experimental results on the performance of Deep-SRSC with the second test setting, including the application to semi-supervised learning by label propagation (Zhu et al., 2003), are shown in the appendix.

The Deep-SRSC network is trained on the training set of the USPS data comprising 7291 images. We adopt three depth settings wherein Deep-SRSC has 1 layer, 2 layers, and 6 layers respectively. We first run SRSC on the training set of the USPS data to obtain the dictionary D_sr and the support regularized sparse codes Z_sr of the training data. Then the optimization problem (7) is solved by the PGD-style iterative method in Section 2.1.2, where X is the test data and A is the adjacency matrix of the KNN graph over the test data, to obtain the support regularized sparse codes Z_sr,test of the test data with dictionary D_sr. Z_sr is used as the ground truth support regularized sparse codes to train Deep-SRSC, and Z_sr,test serves as the ground truth codes of the test data. The approximate codes of the test set of the USPS data are obtained by feeding them forward into the Deep-SRSC network together with the KNN graph over the test data, and the prediction error of Deep-SRSC is the average of the squared error between the predicted codes and Z_sr,test. Figure 4 illustrates the training error of Deep-SRSC with respect to the epoch number for 1 layer, 2 layers, and 6 layers respectively, and Figure 5 in the appendix illustrates the test error of Deep-SRSC. For each depth setting, Deep-SRSC is trained for 300 epochs, and testing is performed every 5 epochs during training. It can be observed that a deeper Deep-SRSC leads to smaller training and test error. Deep-SRSC is implemented with TensorFlow (Abadi et al., 2016). The initial learning rate is set to 10^{-4} and divided by 10 at the 100-th epoch and the 200-th epoch, so the final learning rate is 10^{-6} upon the termination of training.

Table 3 shows the prediction error of Deep-SRSC for different dictionary sizes p and different numbers of layers. It can be observed that Deep-SRSC with more layers demonstrates smaller prediction error for the same dictionary size, due to its better representation capability, and a smaller dictionary size leads to smaller prediction error for the same number of layers, due to the reduced difficulty of representation.
Moreover, the codes predicted by 6-layer Deep-SRsC are used to perform clustering on the test data because of its minimum prediction error, with comparison to the performance of sparse coding and e2-RSC shown in Table 4 with respect to different dictionary size. For either sparse coding or l2-RSC, the dictionary is firstly learned on the training data, then the sparse codes of the test data obtained with respect to that dictionary are used to perform clustering on the test set of USPS data. We can see that SRSC together with its approximation, 6-layer Deep-SRSC, achieve the highest accuracy and NMI. In addition, a reasonably large dictionary benefits SRSC, e.g. increasing p from 100 to 300 boosts its accuracy, since the dictionary atoms serve as the basis for the locally linear structures (local subspaces) of the data manifold and a sufficiently large dictionary size is favorable for modeling all such locally linear structures. On the other hand, a too large dictionary (such as p = 500) imposes much difficulty on the optimization which can even hurt the performance of SRSC. e2-RSC and regular sparse coding.\nWe propose Support Regularized Sparse Coding (SRSC) which captures the locally linear manifold structure of the high-dimensional data for sparse coding and enjoys robustness to noise. SRSC achieves this goal by encouraging nearby data in the manifold to share dictionary atoms. The optimization algorithm of SRSC is presented with theoretical guarantee for the optimization ovei the sparse codes. In addition, we propose Deep-SRsC, a feed-forward neural network, as a fast encoder to approximate the support regularized sparse codes produce by SRSC. Experimental results demonstrate the effectiveness of SRSC by its application to data clustering, and show that Deep-SRSC renders approximate codes for SRSC with low prediction error. The approximate codes generated by 6-layer Deep-SRSC also deliver compelling empirical performance for data clustering."}, {"section_index": "9", "section_name": "REFERENCES", "section_text": "Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems SIAM J. Img. Sci., 2(1):183-202, March 2009. ISSN 1936-4954. doi: 10.1137/080716542. URL http : //dx.d01.0rg/10.1137/080716542.\nMikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representatior Neural Computation, 15(6):1373-1396, 2003.\nMikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for. learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399-2434, 2006\nJerome Bolte, Shoham Sabach, and Marc Teboulle. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program., 146(1-2):459-494, August 2014. ISsN 0025-5610. doi: 10.1007/s10107-013-0701-9.\nJoseph K. Bradley, Aapo Kyrola, Danny Bickson, and Carlos Guestrin. Parallel coordinate descent for 11 regularized loss minimization. In Proceedings of the 28th International Conference on Machine Learning,. ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pp. 321-328, 2011.\nEmmanuel J. Cands. The restricted isometry property and its implications for compressed sensing. Compte. Rendus Mathematique, 346(910):589 - 592, 2008. 1SSN 1631-073X\nF. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.\nI. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear inverse proble with a sparsity constraint. Comm. 
Pure Appl. Math., 57(11):1413-1457, 2004. ISSN 1097-0312. do 10.1002/cpa.20042. URL http://dx.doi.0rg/10.1002/cpa.20042.\nEhsan Elhamifar and Rene Vidal. Sparse manifold clustering and embedding. In NIPS, pp. 55-63, 2011.\nShenghua Gao, Ivor Wai-Hung Tsang, and Liang-Tien Chia. Laplacian sparse coding, hypergraph laplaciar sparse coding, and applications. IEEE Trans. Pattern Anal. Mach. Intell., 35(1):92-104, 2013.\nXiaofei He, Deng Cai, Yuanlong Shao, Hujun Bao, and Jiawei Han. Laplacian regularized gaussian mixtur model for data clustering. Knowledge and Data Engineering, IEEE Transactions on, 23(9):1406-1418, Sep 2011. ISSN 1041-4347. Go Irie, Zhenguo Li, Xiao-Ming Wu, and Shih-Fu Chang. Locally linear hashing for extracting non-linea manifolds. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbu OH, USA, June 23-28, 2014, pp. 2123-2130, 2014. doi: 10.1109/CVPR.2014.272.\nJialu Liu, Deng Cai, and Xiaofei He. Gaussian mixture model with local consistency. In AAAI, 2010.\nSam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding SCIENCE, 290:2323-2326, 2000\nZilei Wang, Jiashi Feng, and Shuicheng Yan. Collaborative linear coding for robust image classification. Int. J Comput. Vis., 114:322333, 2015.\nJianchao Yang, Kai Yu, Yihong Gong, and Thomas S. Huang. Linear spatial pyramid matching using sparst coding for image classification. In CVPR, pp. 1794-1801, 2009.\nTianzhu Zhang, Bernard Ghanem, Si Liu, Changsheng Xu, and Narendra Ahuja. Low-rank sparse coding for image classification. In IEEE International Conference on Computer Vision, ICCV 2013, Sydney, Australia, December 1-8, 2013, pp. 281-288, 2013. doi: 10.1109/ICCV.2013.42.\nMiao Zheng, Jiajun Bu, Chun Chen, Can Wang, Lijun Zhang, Guang Qiu, and Deng Cai. Graph regularized. sparse coding for image representation. 1EEE Transactions on Image Processing, 20(5):1327-1336, 2011.\nXin Zheng, Deng Cai, Xiaofei He, Wei-Ying Ma, and Xueyin Lin. Locality preserving clustering for image. database. In Proceedings of the 12th Annual ACM International Conference on Multimedia, MULTIMEDIA 04, pp. 885-891, New York, NY, USA, 2004. ACM.\nXiaojin Zhu, Zoubin Ghahramani, and John D. Lafferty. Semi-supervised learning using gaussian fields and. harmonic functions. In Machine Learning, Proceedings of the Twentieth International Conference (ICMI 2003), August 21-24, 2003, Washington, DC, USA, pp. 912-919, 2003."}, {"section_index": "10", "section_name": "PROOFS", "section_text": "Masayuki Karasuyama and Hiroshi Mamitsuka. Manifold-based similarity adaptation for label propaga tion. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neu ral Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States., pp. 1547-1555, 2013. URL http://papers.nips.cc/paper/ 500l-manifold-based-similarity-adaptation-for-label-propagation.\nHonglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In NIPS, pp 801-808, 2006.\nJulien Mairal, Francis R. Bach, Jean Ponce, and Guillermo Sapiro. Online dictionary learning for sparse coding In Proceedings of the 26th Annual International Conference on Machine Learning, ICML 2009, Montreal Quebec, Canada, June 14-18, 2009, pp. 689-696, 2009. doi: 10.1145/1553374.1553463.\nZhangyang Wang, Yingzhen Yang, Shiyu Chang, Qing Ling, and Thomas S. Huang. Learning A deep Loo en. coder for hashing. 
In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence IJCAI 2016, New York, NY, USA, 9-15 July 2016, pp. 2174-2180, 2016.\ntwo functions H() and T(v) only differ at v = 0, arg minve{uz,o} H() is the optimal solution to minveR Hk(v) when ux # 0 or ux = 0 and GA 0.\nTS (t 7 2\nFAjlzi( 1+yRA(Zi(t) +yRA(Zi(\n2\nProof of Lemma 1. We first prove that the sequences {Zi(t)} t is bounded for any 1 i n. By Proposition 1,\n|x;-DZ 1+RA(\nDz 1+RA\nAgain we consider the case that GA, 0.\nQ(Zi(t) <Q(Zi(t- Zi(t) Zi(t-1)|2 (Zi(t\nIt follows that ||Zi(t)||1 is bounded, and ||Zi(t)||2 is also bounded. Since GA, 0 for k = 1, ...,p and the indicator function I.o is semi-algebraic function, RA() is also a semi-algebraic function and lower semicontinuous. Therefore, according to Theorem 1 by Bolte et al. (2014), {Zi(t) } t converges to a critical point of F(Z'), denoted by Zi.\nsgn(s-t)P(s;b)-P(t;b))k|s-t|0(t,\n0(t,x)|s-t|-(s-t)(P(s;b)-P(t;b))-x(s-t) (s-t)(P(s;b)-P(t;b))0(t,x)|s-t|+k(s-t)2\n0(t,)|s-t|-(s-t)(P(s;b)-P(t;b))-k(s-t)\ns-t)P(s;b)-P(t;b))0(t,x)|s-t|+x(s-t)\n|Zs.*- Zs.I' e(Zs.,k) + k||Zs.*- Zs,I2 +lIs,ns+ll2|I i OS\n< l|0(Zs,,k)|l2|Zs.* - Zg lI2 + k||Zs *-ZglI2 +l|||2|Is,ns+I\nk?|l4? < l|e(Zs.,k)||2lll|2 + k||lI2 + l|2llAs,nsrI\nWhen 2 0, we have\nl|e(Z.,k)||2 + l|s,nst >42\nle(Zs,,x)|l=> ki (max{0 K|Zki-b[}- b kES;nS, Gki A L max{0 6 kES;\\Si\n1 Gki lI4|2 -k|Zki-b{})2 (max{0 b kES;nSi (max{0 b kES;\\Si\nIt follows that. ATDTDA+T|4|2||DTDA+A|2 = 0 Also, by the proof of Lemma 1, for k E S; n S, since (D D)k = 2XIIzg *z4<o + 0IIz; *z >o we have r = (D'D)k. We now present another property on any nonconvex function P using the degree of nonconvexity in Definition 3: 0(t, ) := sups{-sgn(s - t)(P(s; b) - P(t; b)) x|s t|} on the regularizer P. For any s,t E IR, we have. -sgn(s-t)(P(s;b)-P(t;b))-k|s-t|0(t,) by the definition of 0. It follows that. 0(t,x)|s-t|-(s-t)(P(s;b)-P(t;b))-k(s-t)2 -(s-t)P(s;b)-P(t;b))0(t,k)|s-t|+k(s-t) (28) Applying (28) with P = Pr for k = 1, ..., P, we have. TDD-AT =-AJs, -AS,nsyAs,nst <|Zs,*-Zs |Te(Zs,,k) + k|Zs *- Zs,I2 +|As,ns l|2||s,nst|Iz ||0(Zg,x)|2|IZs * - Zg,l|2 + k|Zs *-Zg|I2 +||A||2||s,ns+|I2 l|e(Zg,x)|2|Il|2 +k||lI2 + l|l|2lls,nsflI2 (29) On the other hand, ' D' D ?||||2. It follows from (29) that. k?|A|I l|e(Zg,,k)|2|4l|2 + k|||I2 +|A||2|As,ns;lIz When ||||2 0, we have ko||4|2 l|(Zs,,k)||2+ k||l|2 +l|As,ns lI l|0(Zs,x)|I2 +||s,ns|2 >4|2 (30) k% - K b kES;nSi (max{0, 2GA - nb}) (31) b kES;\\Si Therefore, 14|2 (max{0, 7GA_ k\\Zki-b|})2+ kES;nS, (max{0, 7GA sb} (32) kES;\\Si where k = -(DTD)k for k = S. O S+. This proves the result of this. 
Figure 5: Test error of Deep-SRSC with dictionary size p = 100, p = 300, and p = 500 (test error versus epoch number for the 1-layer, 2-layer, and 6-layer networks).

Table 6: Parameter sensitivity with respect to γ and K on the USPS data set.

Varying γ with default K = 3
γ        Measure  KM      SC      Sparse Coding  l2-RSC  SRSC
γ = 0.1  AC       0.6878  0.4041  0.8178         0.8287  0.8229
         NMI      0.6312  0.4765  0.8321         0.8398  0.8370
γ = 0.2  AC       -       -       0.8178         0.8287  0.8261
         NMI      -       -       0.8321         0.8398  0.8439
γ = 0.3  AC       -       -       0.8178         0.8287  0.8251
         NMI      -       -       0.8321         0.8398  0.8441
γ = 0.4  AC       -       -       0.8178         0.8287  0.8258
         NMI      -       -       0.8321         0.8398  0.8455
γ = 0.5  AC       -       -       0.8178         0.8287  0.8293
         NMI      -       -       0.8321         0.8398  0.8471
γ = 0.6  AC       -       -       0.8178         0.8287  0.8273
         NMI      -       -       0.8321         0.8398  0.8481
γ = 0.7  AC       -       -       0.8178         0.8287  0.8279
         NMI      -       -       0.8321         0.8398  0.8489
γ = 0.8  AC       -       -       0.8178         0.8287  0.8282
         NMI      -       -       0.8321         0.8398  0.8479

Varying K with default γ = 0.5
K        Measure  KM      SC      Sparse Coding  l2-RSC  SRSC
K = 3    AC       -       -       0.8178         0.8287  0.8293
         NMI      -       -       0.8321         0.8398  0.8471
K = 4    AC       -       -       0.8178         0.8287  0.8216
         NMI      -       -       0.8321         0.8398  0.8487
K = 5    AC       -       -       0.8178         0.8287  0.8243
         NMI      -       -       0.8321         0.8398  0.8535
K = 6    AC       -       -       0.8178         0.8287  0.8462
         NMI      -       -       0.8321         0.8398  0.7995

"}, {"section_index": "11", "section_name": "MORE EXPERIMENTAL RESULTS", "section_text": "The test error of Deep-SRSC with different dictionary sizes, corresponding to Figure 4 showing the training error, is illustrated in Figure 5. We vary the dictionary size and show the clustering results on the COIL-100 data set in Table 5, and we can see that SRSC always achieves the highest accuracy and NMI across the different dictionary sizes.

Table 5: Clustering results on COIL-100 data with different dictionary size p.

Dictionary Size  Measure  KM      SC      Sparse Coding  l2-RSC  SRSC
p = 100          AC       0.5221  0.2372  0.7010         0.7010  0.7344
                 NMI      0.7633  0.5410  0.8834         0.8834  0.8950
p = 300          AC       -       -       0.6979         0.6979  0.7267
                 NMI      -       -       0.8837         0.8837  0.8876
p = 500          AC       -       -       0.6979         0.6979  0.7117
                 NMI      -       -       0.8839         0.8839  0.8856

Table 7: Clustering results on the test set of MNIST data.

Table 8: Clustering results on the test set of CIFAR-10 data.

In addition, we investigate the parameter sensitivity of SRSC, and show in Table 6 the performance change while varying γ, the weight for the support regularization term, and K, the number of nearest neighbors when building the KNN graph for the support regularization term, on the USPS data set. It can be observed that the performance of SRSC is stable over a relatively large range of γ and K.
SRSC often has the highest NMI while maintaining a very competitive accuracy."}, {"section_index": "12", "section_name": "DEEP-SRSC WITH THE SECOND TEST SETTING (REFERRING TO THE TRAINING DATA)", "section_text": "We demonstrate the performance of SRSC and Deep-SRSC with the second test setting (referring to the training. data) on clustering and semi-supervised learning. The ground truth code of the each test data point is computed. by performing the PGD-style iterative method to solve the problem (8) where x; is the test point, D is Dsr. obtained from the training data as in Section 5.2, A is the adjacency matrix of the KNN graph over the test. point and the training data. Table 9 shows the prediction error of Deep-SRSC for different dictionary size p and different number of layers on the USPS data, which is comparable to the case of the first test setting in Table 3.\nTwo more data sets are used in this subsection, i.e. MNIST for hand-written digit recognition and CIFAR-10 for image recognition. MNIST is comprised of 60000 training images and 10000 test images of ten digits from 0 to 9, and each image is of size 28 28 and represented as a 784-dimensional vector. CIFAR-10 consists of 50000 training images and 10000 testing images in 10 classes, and each image is a color one of size 32 32 Using the second test setting, Deep-SRsC is trained on the training set, and the codes of the test set predicted by 6-layer Deep-SRSC are used to perform clustering on the test set for MNIST and CIFAR-10 data, with comparison to other sparse coding based methods. The clustering results are shown in Table 7 and 8 respectively with dictionary size p = 300. We observe that SRsC and Deep-SRsC always achieve the best performance compared to other competing methods. We employ the fast deep neural network named CNN-F (Chatfield et al., 2014) trained on the ILSVRC 2012 data to extract the 4096-dimensional feature vector for each image in the CIFAR-10 data, and all the clustering methods are performed on the extracted features. In addition to the coordinate descent method employed in Section 2.1.2 and Algorithm 1 for the optimization of the sparse codes in SRSC, we further conduct the empirical study showing that the parallel coordinate descent method, which updates the coordinates in parallel for improved efficiency and fits the needs of large-scale data optimization leads to almost the same results as the coordinate descent method on the CIFAR-10 data. Instead of optimization with respect to the sparse code of a single data point in the coordinate descent method, the parallel coordinate descent method updates the sparse codes of P data points in parallel using the same rule as that in the coordinate descent method in Section 2.1.2 and Algorithm 1. While the parallel coordinate descent method is originally designed for convex problems (Bradley et al., 2011; Richtarik & Takac, 2016), it demonstrates almost the same empirical performance as the coordinate descent method for the clustering task on the test set of the CIFAR-10 data, with the accuracy of 0.4622 and NMI of 0.3864. P-parallel coordinate descent leads to P times speedup compared to the coordinate descent method. 
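As a rough illustration of the P-parallel scheme described above, the following sketch batches the per-point update; it reuses the hypothetical pgd_step from the earlier PGD sketch, runs one PGD iteration per point for brevity (a full coordinate step would iterate it M_p times), and executes the groups serially, whereas a real implementation would dispatch the P independent updates to parallel workers or GPU blocks. Holding G fixed within each group, so that the group's updates read only pre-group codes, is an assumption about how the independence is obtained.

```python
def parallel_cd_sweep(Z, X, D, G, lam, gamma, P=10):
    """One sweep of P-parallel coordinate descent over the sparse codes Z (p x n).

    Columns are processed in groups of P; within a group, every update reads the
    codes as they stood when the group started, so the P updates are mutually
    independent and could run concurrently (here they are issued serially).
    """
    n = Z.shape[1]
    for start in range(0, n, P):
        Z_group = Z.copy()  # snapshot: inputs for this group's independent updates
        for i in range(start, min(start + P, n)):
            Z[:, i] = pgd_step(Z_group[:, i], X[:, i], D, G[:, i], lam, gamma)
    return Z
```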
We choose P = 10 and the codes of the training data of CIFAR-10 are learned by the parallel coordinate descent method, and note that the optimization of the codes of the test data are inherently parallelizable due to the nature of the second test setting studied in this subsection.\nMoreover, Table 10 shows the prediction error of Deep-SRSC on the MNIST data and the CIFAR-10 data. It can be observed again that deeper Deep-SRSC network leads to smaller prediction error.\nWe also show the application to semi-supervised learning via label propagation (Zhu et al., 2003), a widel used semi-supervised learning method. Given the data {x1, X2, ..., Xt, Xt+1, ..., Xn} C IRd, the first l point {x1, x2,..., xt} are labeled and named the training data, and the remaining n - l points form the test dat for semi-supervised learning. Semi-supervised learning by label propagation aims to predict the labels of th test data by encouraging local smoothness of the labels in accordance with the similarity matrix over the entir data. The performance of label propagation depends on the similarity matrix. For each sparse coding base method, the similarity matrix Y over the entire data is built by the support similarity introduced in Section 5. Yij = Ajq qz;, and Z' is the code of data point x, for different sparse coding methods including th 6-layer Deep-SRsC with the second test setting. Label propagation is performed on the similarity matrix Y t obtain the labels of the test data, and the error rate is reported. Note that in the experiment of semi-supervise learning by label propagation, the codes of the test data of each data set are obtained first (e.g. the 10o00 tes images in the MNIST data). If x; belongs to the test data of a data set, its code is obtained by performing th the corresponding sparse coding optimization with the dictionary learned on the training data of that data se for SRSC and Deep-SRSC, such optimization also has the KNN graph over the test point x; and the training data as input. With the codes of all the data, the similarity matrix Y over the entire data is constructed. Then, randomly sampled subset of each class is labeled as the training data, with the other data serving as the test dat for semi-supervised learning.\nThe semi-supervised learning results of our methods are compared to that of the Gaussian kernel graph (Gaussian i.e. the KNN graph with the edge weight set by the Gaussian kernel; Sparse Coding (SC) and l--RSC; anc manifold based similarity adaptation (MBS) by Karasuyama & Mamitsuka (2013), one of the state-of-the-ar semi-supervised learning methods based on label propagation. MBS learns the manifold aligned edge similarit. by local reconstruction for label propagation.\nThe comparison results of semi-supervised learning by label propagation on the USPS data and the MNIST. data are shown in Figure 6 and 7, which illustrate the error rate of label propagation with respect to different. number of labeled data points in each class. We can observe from Figure 6 that SRSC and Deep-SRSC with the. second test setting lead to superior results on the application to semi-supervised learning, and the performance of SRSC and Deep-SRSC is always the best with respect to different dictionary size. It can also be observed from. Figure 6 and 7 that SRSC and Deep-SRSC have very similar performance, revealing the good quality of the. fast approximation by Deep-SRSC on the semi-supervised learning task. Furthermore, SRSC and Deep-SRSC. 
significantly outperform other baseline methods with a small number of labeled data points in each class, due to the captured locally linear manifold structure.

Table 9: Prediction error (average squared error between the predicted codes and the ground truth codes) of Deep-SRSC with different depth and different dictionary size on the test set of the USPS data, using the second test setting.

Table 10: Prediction error (average squared error between the predicted codes and the ground truth codes) of Deep-SRSC with different depth on the test set of the MNIST data and CIFAR-10 data, using the second test setting.

Data Set   1-layer  2-layer  6-layer
MNIST      0.15     0.12     0.10
CIFAR-10   0.16     0.13     0.13

Figure 6: Error rate of semi-supervised learning by label propagation on the USPS data, with dictionary size p = 100, p = 300, and p = 500 (error rate versus the number of labeled samples for each class, comparing Gaussian, MBS, SC, L2RSC, SRSC and Deep-SRSC).

Figure 7: Error rate of semi-supervised learning by label propagation on the MNIST data with dictionary size p = 300 (error rate versus the number of labeled samples for each class, comparing Gaussian, MBS, SC, L2RSC, SRSC and Deep-SRSC)."}]
H178hw9ex | [{"section_index": "0", "section_name": "DYNAMIC STEERABLE FRAME NETWORKS", "section_text": "Jorn-Henrik Jacobsen', Bert De Brabandere?, Arnold W.M. Smeulders!\n1Department of Computer Science, University of Amsterdam 2ESAT-PSI, KU Leuven\nFilters in a convolutional network are typically parametrized in a pixel basis As an orthonormal basis, pixels may represent any arbitrary vector in Rn. In this paper, we relax this orthonormality requirement and extend the set of viable bases to the generalized notion of frames. When applying suitable frame bases to ResNets on Cifar-10+ we demonstrate improved error rates by substitution only By exploiting the transformation properties of such generalized bases, we arrive at steerable frames, that allow to continuously transform CNN filters under ar- bitrary Lie-groups. Further allowing us to locally separate pose from canonical appearance. We implement this in the Dynamic Steerable Frame Network, that dynamically estimates the transformations of filters, conditioned on its input. The derived method presents a hybrid of Dynamic Filter Networks and Spatial Trans- former Networks that can be implemented in any convolutional architecture, as we illustrate in two examples. First, we illustrate estimation properties of steerable frames with a Dynamic Steerable Frame Network, compared to a Dynamic Filter Network on the task of edge detection, where we show clear advantages of the de- rived steerable frames. Lastly, we insert the Dynamic Steerable Frame Network as a module in a convolutional LSTM on the task of limited-data hand-gesture recog- nition from video and illustrate effective dynamic regularization and show clear advantages over Spatial Transformer Networks. In this paper, we have laid out the foundations of Frame-based convolutional networks and Dynamic Steerable Frame Networks while illustrating their advantages for continuously transforming features and data-efficient learning."}, {"section_index": "1", "section_name": "INTRODUCTION", "section_text": "For images, as well as any other sensory data, convolutional networks are typically learned fron individual pixel values. Using them as a basis of the learned parameters is the standard approach fo almost all CNNs. In this paper, we argue, that the pixel basis is not necessarily the best choice for representing signals. We show, that suitable alternatives yield increased classification performance by replacement only, while such a replacement adds additional properties to the learned filters tha allow us to transform them under arbitrary pre-defined Lie groups.\nFrom our perspective, the pixel values span an orthogonal basis for the filters in the network (in every layer). Such a pixel basis is complete as it may represent an arbitrary vector in Rn by linear combination, where n is the dimensionality of the filter. In this paper we consider alternatives to this basis, both orthogonal bases, and non-orthogonal frames, arriving at superior expressiveness through steerable function spaces that allow us to transform filters locally and continuously, conditioned on their input.\nUtilizing the steerability properties of frames in practice, we propose Dynamic Steerable Frame Networks (DSFNs) that fill the gap between Spatial Transformer Networks (STNs) (Jaderberg et al. 2015) and Dynamic Filter Networks (DFNs) (De Brabandere et al.2016). 
STNs are not locally adaptive, so they fail in many cases: where it is not beneficial to transform the image globally because doing so would destroy discriminative information (multiple deformable objects, discriminative dynamic movements), or where global registration is performed as a preprocessing step (medical images)."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "DFNs overcome this restriction by locally transforming filters, instead of globally transforming the whole feature stack as STNs do. However, DFNs are black boxes and not data-efficient, as they introduce many unconstrained parameters. Such behavior is undesirable when data is limited and interpretability is key. DSFNs are locally adaptive, interpretable and data-efficient. They overcome the weaknesses of both approaches by combining their strengths, as illustrated in multiple experiments.

We introduce the generalized notion of frames to CNNs, which extends the possible bases to learn from to non-orthogonal and overcomplete sets without loss in generalization. We show that many choices are possible, while overcomplete, non-orthogonal bases consistently outperform the pixel basis when applied to a ResNet (He et al., 2016) for image classification, as illustrated on Cifar-10+. We derive Dynamic Steerable Frame Networks, based on the notion of steerable frames, that can locally adapt the filters in every feature map, conditioned on the input. We illustrate the strength of the approach on an edge detection task, where it outperforms a Dynamic Filter Network. We further show, in a limited-data video classification task, that Dynamic Steerable Frame Networks improve classification performance over Spatial Transformer Networks when global invariance is not desirable.

Frames are a natural generalization of orthogonal bases (Christensen, 2003). In frame terminology, an orthonormal basis is a Parseval-tight frame with unit norm. Every tight frame preserves the signal norm and exhibits perfect reconstruction. Frames can be seen as a superset of orthogonal bases in the sense that every basis is a frame, but not the reverse; see figure 1. The advantage of considering frames over orthogonal bases is that intrinsic signal properties can be spelled out explicitly in the new representation, with the advantage that these properties are directly accessible during learning. From an overcomplete representation, it is more easily visible which part of the features is robust and which part is sensitive to accidental noise variations.

Figure 1: a) An orthonormal basis in R2; u1 and u2 are linearly independent and span R2. A dot in this example represents a filter in a convolutional network with coefficients {v1, v2}. b) A tight frame in R2; u1, u2 and u3 are linearly dependent. A dot in this example represents a convolutional filter with coefficients {v1, v2, v3}. The frame is an overcomplete representation, again spanning R2 and again preserving the norm. Note that the set of filter coefficients represented by the dot is not unique. Thus even if one v is corrupted by noisy updates or measurements, the filter may still be robust.

Our contributions are:

• We argue that suitable frame bases are beneficial when representing sensory data, compared to the commonly used pixel basis.

• Exploiting the transformation properties of frames further, we derive Dynamic Steerable Frame Networks that are able to continuously transform features locally and fill the gap between Spatial Transformer Networks and Dynamic Filter Networks.
• Dynamic Steerable Frame Networks learn to separate pose and feature. This enables the network to be locally equivariant or invariant with respect to certain feature poses, or even to perform in-network quasi data-augmentation, while only the inputs and the backpropagated error signals determine which of these are applied, and to what extent.

In a standard convolutional network, a filter kernel is a linear combination over the standard basis for l2(N). The standard basis is composed of a delta function for every dimension, and Wi is the ith filter of the network with parameters w_n^i:

e1 = {1, 0, 0, ..., 0}, e2 = {0, 1, 0, ..., 0}, ..., eN = {0, 0, 0, ..., 1},

Wi = Σ_{n=1}^{N} w_n^i e_n.    (1)

Without loss of generalization, the orthonormal standard basis can be replaced by a frame to include non-orthogonality, overcompleteness, increased symmetries or steerability in the representation. Changing from the pixel basis to an arbitrary frame is as simple as replacing the pixel basis e_n with a frame of choice with elements v_n as follows:

Wi = Σ_{n=1}^{N} u_n^i v_n,

where u_n^i are again the filter coefficients being learned.

In practice, for CNNs working on images, we investigate bases derived from steerability requirements: orthogonal polynomials, framelets and members of the Gaussian derivative family. See figure 2 for a selection of frames.

Figure 2: An illustrative plot of multiple 3x3 spanning sets: a) pixel basis, b) orthogonal polynomial, c) non-orthogonal frame. Note the increased symmetries in b) and c).

A pleasant property of many frames is steerability (Unser & Chenouard, 2013; Hel-Or & Teo, 1998; Michaelis & Sommer, 1995): the power of a function to represent transformed versions of itself by linear combination. The advantage of steerability in CNNs working on images is the ability to produce infinitely many transformed variants of a visual feature f(x, y): R2 -> R from its canonical appearance.

To achieve this goal, we cast these variations as the result of the action of a family of transformations g(τ) on the canonical features f(x, y), where τ ∈ Rk parametrizes these k-parameter transformations. If the problem at hand requires the distinction between multiple unknown poses of the same feature, a typical CNN has to compute them all exhaustively to determine whether a particular pose is present or not. Things get out of hand when the search space is a continuous transformation group, such as the Lie group of affine transformations, requiring k -> ∞ feature maps, which is computationally intractable or requires expensive searches over all possible transformations (Gens & Domingos, 2014). One way out is to coarsely sample a few equally spaced points on the equivariant transformation manifold, or to restrict the space to a smaller group (Cohen & Welling, 2016; Dieleman et al., 2016). What remains, however, is that the number of resulting feature maps for more general groups quickly becomes infeasible. An elegant way to overcome these limitations is the concept of steerability (Freeman & Adelson, 1991; Perona, 1992; Unser & Chenouard, 2013), which is taken as inspiration here.

In this work, we focus on Lie groups. Transformations g(τ) over a range constitute a Lie group if they are closed under composition, they are associative, they are invertible, there exists an identity element, and their maps for inverse and composition are infinitely differentiable (Hel-Or & Teo, 1998). Teo and colleagues (Teo & Hel-Or, 1998) have given the following definition.
Definition 1 (Steerability): A function f(x, y): R2 -> R is steerable under a k-parameter Lie transformation group G if any transformation g(τ) ∈ G of f can be written as a linear combination of a fixed, finite set of frame functions φ_m(x, y):

g(τ)f(x, y) = Σ_{m=1}^{M} β_m(τ) φ_m(x, y) = β^T(τ) φ(x, y).

A function steerable under a k-parameter Lie group is capable of representing infinitely many states of a particular set of transformations. In many cases, only a finite set of frame functions is needed to represent these. In CNN terms, this means that a limited number of feature maps is sufficient to represent complete continuous transformation groups, when the frame functions and the steering functions are chosen appropriately. Finding appropriate frame functions is the biggest challenge in steering arbitrary functions over arbitrary Lie groups.

To study the action of a Lie group G on a function, we use the close relation between the Lie group and its tangent space, spanned by the group's infinitesimal generators. The differential operators of the group action are obtained by computing the derivative of the group action with respect to its parameters at the identity element. A Lie algebra can be considered as an "infinitesimal" Lie group. If the group is simply connected, the group action on a visual feature f(x, y) can be obtained via the exponential map (Teo & Hel-Or, 1998):

e^{τ_i L_i} = I + τ_i L_i + (τ_i L_i)²/2! + ...,

where L_i are the group's infinitesimal generators and I is the identity element. This implies that one can compute the Taylor expansion with respect to the desired transformation group parameters to obtain elements of the group. If a finite frame set is equivariant towards the desired transformation group (it contains the orbit of the function to be steered), the series expansion yields linearly dependent elements after a finite number of steps; then the frame is globally steerable under the desired transformation group. If this is not the case, as for example when scaling a Gaussian function, a finite frame set is only sufficient to accurately steer the function over a bounded interval: the function is locally steerable, but not globally.

When training a CNN, the functions represented by each feature naturally change from update to update. It is desirable to separate the frame functions from the effective features as learned by the network. In such a Structured Receptive Field Network (RFNN) (Jacobsen et al., 2016), each filter's parameters are not its mere pixel values, but the coefficients weighting the sum over a fixed frame set. Thus, analogous to equation 1, every effective filter Wi(x, y) has the following form:

Wi(x, y) = w_1^i v_1 + w_2^i v_2 + ... + w_N^i v_N,

where v_n denotes the nth element of the frame.

To be able to separate a feature's pose from its canonical appearance, we are interested in a steerable version of an arbitrary filter Wi(x, y) under a k-parameter Lie group. From this parametrization follows

g(τ)Wi(x, y) = Σ_{n=1}^{N} w_n^i g(τ) v_n,

and by substituting the steerability relation of Definition 1 it follows that

g(τ)Wi(x, y) = Σ_{n=1}^{N} w_n^i Σ_{m=1}^{M} β_m(τ) φ_m(x, y).

Thus it is sufficient to determine the group action on the fixed frame by steering it, separating the canonical feature itself from its k-parameter variants: the w_n^i govern the weight of each frame coefficient to form a feature Wi(x, y), and the β_m are the steering functions governing the transformation g(τ) acting on Wi(x, y) as a whole. From now on, learning and transforming features amounts to a point-wise multiplication of frame coefficients with cos, sin and exp activation functions, which is suitable for learning in a CNN.
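To make the separation of canonical appearance and pose concrete, below is a minimal sketch (not from the paper) for the rotation group, using the classic first-order Gaussian-derivative pair as the frame, for which the steering functions are β(θ) = (cos θ, sin θ): a rotated copy of G_x equals cos(θ)·G_x + sin(θ)·G_y.

```python
import numpy as np

def gauss_deriv_frame(size=7, sigma=1.5):
    """Return the first-order Gaussian derivatives G_x, G_y on a size x size grid."""
    r = (size - 1) / 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    gx = -x / sigma**2 * g   # d/dx of the Gaussian
    gy = -y / sigma**2 * g   # d/dy of the Gaussian
    return gx, gy

gx, gy = gauss_deriv_frame()

# A "learned" filter is a linear combination of frame elements; the
# coefficients w would normally be CNN parameters updated by backprop.
w = np.array([0.8, -0.3])
W = w[0] * gx + w[1] * gy

def steer(theta):
    """Rotate the filter by theta: transform the frame, keep w fixed."""
    phi1 = np.cos(theta) * gx + np.sin(theta) * gy    # g(theta) applied to G_x
    phi2 = -np.sin(theta) * gx + np.cos(theta) * gy   # g(theta) applied to G_y
    return w[0] * phi1 + w[1] * phi2

W_rot = steer(np.pi / 4)   # the same canonical feature, rotated by 45 degrees
```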
"}, {"section_index": "3", "section_name": "2.4 DERIVING THE FRAME AND STEERING FUNCTIONS", "section_text": "Now the problem is reduced to finding a suitable frame as a function space underlying the learned filters. There are many approaches to derive a function space that is closed under the desired transformation group and, as we show, many options give rise to bases that work considerably well when inserted into state-of-the-art CNNs. The most straightforward way is to derive it from the group's infinitesimal generators; for brevity we refer the interested reader to (Hel-Or & Teo, 1998) and directly cite some derived equivariant function spaces from that paper.

Transformation group                          Equivariant function space
x,y translation                               x^p y^q e^{ax+by}
x,y scaling                                   x^p y^q ln(x)^r ln(y)^s
Rotation & uniform scaling                    r^a ln(r)^p e^{ikθ}
x,y translation & x,y scaling & rotation      x^p y^q

Table 1: Examples of function spaces closed under various non-Abelian multi-parameter groups, as derived in (Hel-Or & Teo, 1998). They can readily be used as a frame for CNNs by the procedure we derive here.

Once a frame φ is chosen, we can simply check whether it is closed under the given transformation group by verifying for each generator L_i that

L_i φ(x, y) = B_i φ(x, y),

where B_i is a constant matrix. For a k-parameter group, the steering matrix is then obtained by exponentiating the generator matrices,

A(τ) = e^{τ_k B_k} ··· e^{τ_1 B_1}.

To arrive at a practical solution, we have to consider the problem that locally bounded functions cannot be steered globally with a finite steerable frame. To achieve a suitable approximation for our case, we separate scaling into two parts, an inner scale and an outer scale σ_a, where a stands for aperture. The inner scale can directly be steered via the above derivation and represents the slope of the local measurement taken by a filter, while the outer scale represents the size and shape of the filter's receptive field. To achieve anisotropic receptive fields, we propose to first steer the scale at every pixel and then steer the derived function space on this non-uniformly scaled grid, resulting in locally deformable receptive fields. Due to the associativity of convolution, we can combine steering the derived function space and the receptive field scale into one operation. In this work, we use a second-order approximation of the Gaussian that is capable of giving a good approximation to common CNN receptive field sizes 3x3, 5x5 and 7x7. For scaling over larger ranges, we recommend the spectral decomposition approach (Koutaki & Uchimura, 2014)."}, {"section_index": "5", "section_name": "2.5 DYNAMIC STEERABLE FRAME NETWORKS", "section_text": "Estimating the local pose of a feature from a steerable function space is analytically intractable in the case of most multi-parameter groups. In this paper, we introduce the Dynamic Steerable Frame Network that combines the advantages of steerable function spaces with the power of neural network function estimators, by estimating pose parameters from a function space equivariant under the transformation group at hand. Specifically, our architecture is inspired by the recently introduced Dynamic Filter Networks (De Brabandere et al., 2016). The Dynamic Filter Network (DFN) generates one feature per location in a feature map, which boils down to a locally connected convolution layer for which the parameters are generated by a different network that estimates them from the input, yielding a different filter kernel for every location in the input.
Figure 3: The Dynamic Steerable Frame Network. The network transforms an input image into a steerable frame (here an example with 3 frame functions) and estimates the local feature pose at each location in this equivariant space with a small pose-estimating network. It then outputs a set of pose coordinates τ_k, dependent on the chosen group parametrization. These are inserted into the matrix of steering equations A(τ) and applied to the frame φ, yielding the locally steered frame. In the same operation, we integrate the weights w_n that govern the feature map's canonical appearance; these are the weights learned by a normal CNN. The Dynamic Steerable Frame Network can decide to commute with a set of poses, to be invariant to them, to only look for certain poses, or to act like a normal CNN, where each feature map has one pose and one canonical appearance assigned to it. This is determined only by the input data and the backpropagated error signals."}, {"section_index": "5", "section_name": "The DFN takes the form", "section_text": "O(x, y) = F_{x,y}(I(x, y)),

where the F_{x,y} are generated by another network from the input. We propose the Dynamic Steerable Frame Network, where the parameters that condition the filter are pose transformation parameters of the steerable function space, estimated from the input, similar to how it is done in Spatial Transformer Networks, except that in our case we aim for locally adaptive filters. The filters F_{x,y} share the same set of weights over the whole feature map, so they represent the same canonical appearance everywhere, while their local pose is dynamically estimated by a Pose-Generating Network that takes the form

τ(x, y) = Φ(I(x, y)).

Thus, the canonical appearance is translation invariant, but its geometrical pose is not. In terms of the frame parametrization above, this means the set of weights w_n^i is fixed, but the frame v_n is transformed under a pre-defined k-parameter group with parameters τ. See figure 5 for an illustration.

The method consists of two parts: i) a Pose-Generating network estimating local pose parameters of a feature, conditioned on the input from a steerable input space; ii) a Dynamic Filtering mechanism, convolving transformed versions of a feature with every location in the input feature map, based on the estimates of the pose-generating network. Due to the linearity of convolution, we can first perform a transformation of the input into the steerable frame space, and in this space we perform i) and ii) as point-wise multiplications.
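A toy forward pass in the spirit of this two-step pipeline is sketched below, for the rotation group only. The learned pose-generating network is replaced here by an analytic orientation estimate, as a hypothetical stand-in; everything after the frame convolutions is, as stated in the text, point-wise.

```python
import numpy as np
from scipy.signal import convolve2d

def gauss_derivs(size=7, sigma=1.5):
    r = (size - 1) / 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return -x / sigma**2 * g, -y / sigma**2 * g

image = np.random.rand(64, 64)            # stand-in input
gx, gy = gauss_derivs()

# 1) Change of basis: convolve the input with every frame element once.
rx = convolve2d(image, gx, mode="same")
ry = convolve2d(image, gy, mode="same")

# 2) Pose generation: one transformation parameter per location. A DSFN
#    produces theta(x, y) with a small trainable network; here the local
#    gradient orientation serves as an illustrative substitute.
theta = np.arctan2(ry, rx)

# 3) Steering, applied point-wise: because convolution is linear, mixing the
#    fixed responses with the steering functions cos/sin equals convolving
#    with the locally rotated kernels.
rx_steered = np.cos(theta) * rx + np.sin(theta) * ry
ry_steered = -np.sin(theta) * rx + np.cos(theta) * ry

# 4) Canonical appearance: shared learned coefficients (a 1x1 convolution in
#    the full model) recombine the locally steered frame responses.
w = np.array([0.8, -0.3])
feature_map = w[0] * rx_steered + w[1] * ry_steered
```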
"}, {"section_index": "6", "section_name": "3 RELATED WORK", "section_text": "Steerable filters are a concept established early in signal processing. Initially introduced by Freeman & Adelson (1991), the concept was extended to the steerable pyramid by Simoncelli & Freeman (1995) and further to a Lie-group formulation (Hel-Or & Teo, 1998; Michaelis & Sommer, 1995). Steerability has recently been extended to tight frames, with Simoncelli's steerable pyramid and multiple other wavelets arising as special cases of the non-orthogonal Riesz transform (Unser & Chenouard, 2013). Steerable pyramids have been applied to CNNs as a pre-processing step (Xue et al., 2016), but have not yet been learnable. We incorporate steerable frames in CNNs to increase their de facto expressiveness and to allow them to learn their configurations, rather than picking them a priori.

Convolutional networks with alternative bases have been proposed with various degrees of flexibility. A number of works utilize a change of basis to stabilize training and increase convergence behavior (Rippel et al., 2015; Arjovsky et al., 2015). Another line of research is concerned with complex-valued CNNs, either learned (Tygert et al., 2016) or fully designed, like the Scattering networks (Bruna & Mallat, 2013; Oyallon & Mallat, 2015). Scattering, as well as the complex-valued networks, rest upon a direct connection between the signal processing literature and CNNs. Inspired by the former, Structured Receptive Field Networks are learned from an overcomplete multi-scale frame, effectively improving performance for small datasets due to restricted feature spaces (Jacobsen et al., 2016). Also related is the work on group-equivariant CNNs (Cohen & Welling, 2016) and cyclic pooling (Dieleman et al., 2016), where equivariance towards the dihedral group is theoretically guaranteed, yielding increased accuracy. Inspired by CNNs learned from alternative bases, we introduce the general principle of frame-based convolutional networks that allow for non-orthogonal, overcomplete and steerable feature spaces.

Another way to impose structure onto CNN representations, and subsequently increase their data efficiency, is to incorporate explicit geometrical transformations into them, either by learning transformation operators and group representations (Cohen et al., 2014; Wang et al., 2009), or by pre-defining the possible transformations, as done in Transforming Autoencoders (Hinton et al., 2011), which map their inputs from image to pose space through a neural network. Spatial Transformer Networks (Jaderberg et al., 2015) learn global transformation parameters in a similar way, while applying them to a nonlinear co-registration of the feature stack to some learned pose. This yields especially high performance on tasks where centering the objects is beneficial. Dynamic Filter Networks move one step further and estimate filters for each location, conditioned on their input. These approaches are all dynamic in the sense that they condition their parameters on the input appearance. We combine the idea of Dynamic Filter Networks with explicit pose prediction into Dynamic Steerable Frame Networks that can estimate poses from a continuous input space, conditioned on the input. As such, we overcome the difficulty of estimating local pose, while being able to separate pose and feature learning globally.

To show the validity of general frame representations, we compare different bases in a state-of-the-art pre-activation deep residual network architecture (He et al., 2016) on the Cifar-10+ (Krizhevsky & Hinton, 2009) dataset with moderate data augmentation of crops and flips.

Error on Cifar10+
Method              Pixel   Image Frame   Naive Frame
ResNet-20           7.85%   7.61%         8.97%
ResNet-56           6.68%   6.08%         7.30%
ResNet-110          5.84%   5.34%         6.96%
Densenet K12 L40    5.28%   4.99%         6.39%
Densenet K12 L100   4.16%   3.78%         5.21%

Table 2: Results on Cifar-10 with moderate data augmentation (crops/flips) with the recently introduced pre-activation residual network and Densenet, using the standard pixel basis, a steerable frame basis designed for natural images, and the naive steerable x^p y^q frame from Table 1 that does not take natural image statistics into account. The natural-image-statistics-based frame outperforms the pixel basis consistently, while the naive frame consistently performs about 1% worse than the baseline, highlighting the benefit of a frame suited to the type of input data.
We evaluated our approach on multiple networks and network sizes. The setup used for the ResNets is as described in (He et al., 2016): the batch size is 64 and we train for 164 epochs with the described learning-rate decrease. The ResNet architectures used are without bottlenecks, having 20, 56 and 110 layers. For the Densenets we follow (Huang et al., 2016) and evaluate the K=12, L=40 and the K=12, L=100 models. We run our experiments in Keras (Chollet, 2015) and Tensorflow (Abadi et al., 2016). In the first experiment, we run the models on the standard pixel basis to get a viable baseline. Secondly, we replace the pixel basis with widely used frames that take natural image statistics into account, namely non-orthogonal, overcomplete Gaussian derivatives (Florack et al., 1992) and non-orthogonal framelets (Daubechies et al., 2003) in an alternating fashion, yielding superior performance compared to the pixel basis by replacement alone.

We also show that the naive x^p y^q frame (see Table 1) performs consistently worse than the other two choices, as it does not take natural image properties into account, while it is important to mention that this 1% performance decrease also comes with additional properties that might be highly beneficial in particular tasks. We have also found orthogonal polynomials to not work very well (around a 3% performance decrease), which is in line with our expectation that suitable frames should take natural image statistics into account. 2D frames are generated from 1D functions via the following generating process:

Frame = {v_0, v_1, v_2, v_3} ⊗ {v_0, v_1, v_2, v_3}^T.

The results are reported in Table 2. The fact that the pixel basis can be replaced by steerable frames, and that performance even improves when the frame is chosen well, is remarkable: every filter in the CNN gains additional properties while performance improves in the standard setting already, and finding suitable frames is no more expensive than running the smallest CNN as many times as one has frames to choose from, as the performance we observed was consistent across multiple model sizes. Frame-based CNNs run at the same runtime as vanilla CNNs."}, {"section_index": "7", "section_name": "4.2 DYNAMIC STEERABLE FRAME NETWORKS", "section_text": "In this section we report two experiments. The first is an edge detection task, highlighting the difference between our approach and multiple baselines in a fine-grained pixel-wise labeling task. In the second, we apply a 2D convolutional LSTM to a small hand-gesture recognition video dataset to illustrate how the Dynamic Steerable Frame Network regularizes the model effectively, and to illustrate its benefits over Spatial Transformer Networks.

The model used in both experiments is learned from a steerable Gauss-Hermite frame. The Dynamic Steerable Frame Network consists of three processing steps: 1) a change to frame space on the input; 2) the Pose-Generating network estimates the pose from this transformed input, outputting a set of pose variables for each location in the image; 3) the steering functions derived in section 2.4 are applied to these pose-variable maps and effectively act as nonlinear pose-parametrized activation functions that regularize the Pose-Generating network to output an explicitly interpretable pose space. Finally, a 1x1 convolution layer is applied to the already transformed output maps, representing the weights w_n governing the canonical appearance of the ith feature map; see also figure 5. Dynamic Steerable Frame Networks run at the same computational cost as vanilla Dynamic Filter Networks.
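A sketch of a frame-parametrized layer built with the outer-product generating process above follows; only the coefficient tensor is trainable, the frame is fixed. The Gaussian-derivative choice of 1D functions is one of the options named in section 4.1, not necessarily the exact frame used in the experiments.

```python
import numpy as np

# 1D Gaussian derivatives up to second order on a 3-point grid.
x = np.array([-1.0, 0.0, 1.0])
g = np.exp(-x**2 / 2.0)
v = [g, -x * g, (x**2 - 1) * g]

# Frame = {v_i} (x) {v_j}^T : all separable products of the 1D functions.
frame = np.stack([np.outer(vi, vj) for vi in v for vj in v])   # (9, 3, 3)

def frame_filters(alpha, frame):
    """Compose convolution filters from learned coefficients.
    alpha: (out_channels, in_channels, n_frame) coefficient tensor.
    returns: (out_channels, in_channels, 3, 3) filter bank."""
    return np.tensordot(alpha, frame, axes=([2], [0]))

alpha = 0.1 * np.random.randn(16, 3, frame.shape[0])   # the only trainable part
filters = frame_filters(alpha, frame)                  # plug into any conv op
```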
"}, {"section_index": "8", "section_name": "4.2.1 EDGE DETECTION", "section_text": "In this experiment, we compare a Dynamic Filter Network (De Brabandere et al., 2016) baseline with an autoencoder and a Dynamic Steerable Frame Network on the task of edge detection. The problem is formulated as a pixel-wise classification task, and we report the root mean squared error on an unseen test set. The labels are the edges. The dataset is infinite, as we produce random blobs and create the edge labels with a standard scikit-image function. The standard DFN can freely learn an input layer with 2 filters and 3 subsequent 1x1 layers that can non-linearly recombine the inputs, whereas the Frame DFN receives a steerable frame as an input, allowing it to leverage the fine-grained orientation information without the need to learn it. The Dynamic Steerable Frame Network has the exact same architecture as the DFN but is geometrically regularized on its output, as can be seen in figure 5; as an input it receives a first-order Gauss-Hermite frame that can be steered globally towards rotation and locally towards scale.

Location-varying methods are clearly superior in this task, compared to the location-invariant autoencoder. The DFN increases its performance substantially when given the steerable frame as an input, indicating its inability to learn a continuously transforming frame by itself. Finally, the Dynamic Steerable Frame Network clearly outperforms all baselines due to its ability to continuously transform its filters in a well-regularized manner. As an extra, we get the local feature pose for free from the output of the DSFN; the baseline has no notion of an explicit pose parameter, see figure 4.

Method        RMSE
Autoencoder   18.034
DFN           5.669
Frame DFN     1.554
DSFN          0.778

Figure 4: Results on the edge detection task. Top is an illustration of a test image, bottom one sample from the actual infinite dataset; reported is root mean squared error. Autoencoder denotes a vanilla location-invariant autoencoder, DFN the plain Dynamic Filter Network, Frame DFN a DFN whose input is a frame, and DSFN the Dynamic Steerable Frame Network. a) is the input, b) is the label, c) the prediction and d) the angular pose variable. d) is an output we get for free when training DSFNs, while a DFN has no notion of interpretable angle variables. Location-varying methods clearly outperform the static autoencoder, while learning the DFN from a steerable frame again increases performance substantially. The DSFN substantially outperforms all other methods due to its continuously transforming input and output space.
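The exact blob generator is not specified in the text; one plausible reconstruction of the infinite (image, edge-label) stream, using scikit-image's canny as the "standard scikit-image function", is:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import canny

def sample_blob_image(size=64, seed=None):
    """One plausible random-blob generator: thresholded smoothed noise."""
    rng = np.random.default_rng(seed)
    smooth = gaussian_filter(rng.random((size, size)), sigma=6)
    return (smooth > smooth.mean()).astype(float)

def edge_labels(image):
    """Pixel-wise 0/1 edge targets from a standard edge detector."""
    return canny(image).astype(float)

x = sample_blob_image(seed=0)
y = edge_labels(x)   # train any of the compared models on (x, y) pairs
```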
"}, {"section_index": "9", "section_name": "4.2.2 SMALL SCALE VIDEO CLASSIFICATION", "section_text": "To show the ability of the Dynamic Steerable Frame Network to effectively regularize in dynamic settings where poses play an important role and where Spatial Transformer Networks do not work well, we apply it to the task of hand-gesture recognition, namely the Cambridge Hand-Gesture dataset (Kim & Cipolla, 2009), consisting of 9 classes of hand movements and poses in 900 videos; we use 750 for training, 50 for validation and 100 for testing. The dataset is very small and contains classes where global movement plays an important role, and thus provides a good test bed to show the effectiveness of the DSFN's regularization ability compared to Spatial Transformers.

Our baseline model is a convolutional LSTM with 10 output feature maps, batch normalization and a dense layer for classification. As a second baseline, we increase the capacity of the model by adding a second LSTM layer and a second batch normalization step. We combine two instances of a Spatial Transformer Network with a convolutional LSTM: one that can perform full affine transformations and one that is restricted to rotation and scaling. The DSFN module is applied to the input layer of the smaller model with 4 output feature maps. The setup of the steerable frame used in this model is a Gauss-Hermite frame. The steerability is uniquely parametrized as {β_s, β_θ}, allowing for scaling and rotation. Both Spatial Transformer Networks fail to learn useful warps of the input image and therefore decrease the performance of the baseline. The affine model only manages to correctly classify multiple instances of a static class that has no movement information related to its label, while the rot/scale model improves over the affine one but still does not manage to learn useful scalings or rotations. The DSFN manages to learn locally rotation- and scale-invariant filters that follow the boundaries and other features across the video, as desired. Results are reported in Table 3.

Model      convLSTM 1-layer   2-layer   rot/scale-DSFN   rot/scale-STN   affine-STN
# Params   905k               913k      907k             971k            1037k
Accuracy   35.42%             39.31%    62.18%           21.34%          12.21%

Table 3: Results on the Cambridge Hand-Gesture Recognition dataset, illustrating the effectiveness of the pose regularization provided by the Dynamic Steerable Frame Network. Adding the DSFN module to the convLSTM drastically improves performance. Increasing the capacity of the baseline to two layers does not make up for the difference in performance, while adding the STN to the convLSTM decreases performance significantly, as the STN does not manage to learn meaningful global transformations that do not remove the class-specific information content. This is further substantiated by an increased performance when removing the ability to shear and translate the input from the STN. The DSFN outperforms all other approaches while only adding 2k free parameters to the baseline.

For visualizations of the learned transformations, see Appendix A. The results illustrate the effectiveness of the DSFN in regularizing the LSTM on a small-scale task where mostly local invariance is desired, but global invariance destroys most of the class-specific information. The DSFN improves performance over the baseline by about 22%, while the Spatial Transformer Network decreases performance by about 15%, or even to random in the full-affine case."}, {"section_index": "10", "section_name": "5 DISCUSSION", "section_text": "The insight that multiple frames can be considered as viable spanning sets for CNN representations leads us to steerable frames, whose properties we exploit explicitly in our derived Dynamic Steerable Frame Networks, such that they can readily be accessed during training.
The proposed method is a hybrid of Dynamic Filter Networks and Spatial Transformer Networks, enabling locally adaptive filtering with geometrical constraints.

We illustrate the effectiveness of the approach on an edge detection task that requires fine-grained pixel-wise labeling, where Dynamic Steerable Frame Networks outperform a standard Dynamic Filter Network and an autoencoder baseline. Further, we illustrate the ability of the Dynamic Steerable Frame Network to regularize recurrent networks in a small-data video classification scenario where Spatial Transformer Networks fail to learn meaningful transformations.

We have introduced the notion of frame-based convolutional networks. Our experiments illustrate that a simple replacement of the standard basis by a frame suitable for natural images leads to increased performance.

Future work is to apply the model to other problem domains like egocentric video, robotics applications, as well as volumetric medical imaging videos of moving organs. We expect our Dynamic Steerable Frame Network approach to be beneficial in any problem where spatiotemporal continuity, data-efficiency, or interpretable pose spaces are key."}, {"section_index": "11", "section_name": "ACKNOWLEDGEMENTS", "section_text": "We would like to thank Edouard Oyallon and Taco Cohen for insightful comments and discussions."}, {"section_index": "12", "section_name": "REFERENCES", "section_text": "Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. arXiv preprint arXiv:1511.06464, 2015.

Joan Bruna and Stephane Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872-1886, 2013.

Francois Chollet. Keras. https://github.com/fchollet/keras, 2015.

Ole Christensen. An Introduction to Frames and Riesz Bases, volume 7. Springer, 2003.

Ingrid Daubechies, Bin Han, Amos Ron, and Zuowei Shen. Framelets: MRA-based constructions of wavelet frames. Applied and Computational Harmonic Analysis, 14(1):1-46, 2003.

Robert Gens and Pedro M Domingos. Deep symmetry networks. In Advances in Neural Information Processing Systems, pp. 2537-2545, 2014.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.

Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pp. 44-51. Springer, 2011.

Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.

Jorn-Henrik Jacobsen, Jan van Gemert, Zhongyu Lou, and Arnold W.M. Smeulders. Structured receptive fields in CNNs. 2016.

Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.

Markus Michaelis and Gerald Sommer. A Lie group approach to steerable filters. Pattern Recognition Letters, 16(11):1165-1174, 1995.

Pietro Perona. Steerable-scalable kernels for edge detection and junction analysis. In ECCV, pp. 3-18. Springer, 1992.

Eero P Simoncelli and William T Freeman.
The steerable pyramid: a flexible architecture for multi-scale derivative computation. In ICIP (3), pp. 444-447, 1995.

Patrick C Teo and Yacov Hel-Or. Lie generators for computing steerable functions. Pattern Recognition Letters, 19(1):7-17, 1998.

Mark Tygert, Joan Bruna, Soumith Chintala, Yann LeCun, Serkan Piantino, and Arthur Szlam. A mathematical motivation for complex-valued convolutional networks. Neural Computation, 2016.

Luc MJ Florack, Bart M ter Haar Romeny, Jan J Koenderink, and Max A Viergever. Scale and the differential structure of images. Image and Vision Computing, 10(6):376-388, 1992.

Yacov Hel-Or and Patrick C Teo. Canonical decomposition of steerable functions. Journal of Mathematical Imaging and Vision, 9(1):83-95, 1998.

Jimmy Wang, Jascha Sohl-Dickstein, and Bruno Olshausen. Unsupervised learning of Lie group operators from image sequences. In Frontiers in Systems Neuroscience. Conference Abstract: Computational and Systems Neuroscience, 2009.

Tianfan Xue, Jiajun Wu, Katherine L Bouman, and William T Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. arXiv preprint arXiv:1607.02586, 2016.

APPENDIX A VISUALIZING DSFN AND STN TRANSFORMATIONS

[Figure: learned transformations of the DSFN (top rows) and the STN (bottom rows) on hand-gesture inputs.]

Figure 5: Visualizations of the learned transformations by the Dynamic Steerable Frame Network (DSFN, top) and the Spatial Transformer Network (STN, bottom) on the hand-gesture recognition dataset. The STN zooms and rotates the hands arbitrarily and thereby apparently removes important information content, leading to low classification accuracy. The DSFN acts locally and adaptively filters the hands in multiple ways. Note that the DSFN did not learn fully rotation-invariant filters in all 4 cases, but in 3 cases produces different filter responses for different sides of the hand. However, it does follow the contours of the hand and segments the borders from the background. This indicates that full rotation invariance is not suitable for this task. This would be hard to assess if one had to choose the degree of invariance a priori, while the DSFN has the means to learn the necessary amount of local invariance."}, {"section_index": "13", "section_name": "APPENDIX B EQUIVARIANCE PROOF & STEERING EQUATION DERIVATION", "section_text": "To prove that a frame is equivariant with respect to the action of a group transformation determined by its generator L_i, we simply have to show that

L_i φ(x, y) = B_i φ(x, y),    (13)

where B_i is some n × n matrix.

In the case of the Hermite polynomials (here considered up to second order), we have

φ(x, y) = {1, x, y, x² − 1, xy, y² − 1}.

To verify that these functions span an equivariant function space with respect to rotation, we apply the generator of rotations to the frame and verify that equation 13 holds. The generator of rotations in the plane is given by
L_r = y ∂/∂x − x ∂/∂y.

Applied to each frame element, we get:

L_r φ(x, y) = (0, y, −x, 2xy, y² − x², −2xy)^T = B_r φ(x, y).

It is straightforward to solve this linear system and obtain the 6 × 6 matrix B_r:

       [ 0   0   0   0   0   0 ]
       [ 0   0   1   0   0   0 ]
B_r =  [ 0  −1   0   0   0   0 ]
       [ 0   0   0   0   2   0 ]
       [ 0   0   0  −1   0   1 ]
       [ 0   0   0   0  −2   0 ]

Thus, we have proven that the function space is closed under the action of the group, and the Hermite polynomials constitute an equivariant function space with respect to rotation.

Subsequently, the exponential map directly yields the steering equations, collected in the interpolation matrix A_θ, that can rotate the whole frame by θ:

φ_θ(x, y) = A_θ φ(x, y) = e^{θ B_r} φ(x, y),

with

       [ 1    0       0       0                 0        0               ]
       [ 0    cos θ   sin θ   0                 0        0               ]
A_θ =  [ 0   −sin θ   cos θ   0                 0        0               ]
       [ 0    0       0       (1 + cos 2θ)/2    sin 2θ   (1 − cos 2θ)/2  ]
       [ 0    0       0       −(sin 2θ)/2       cos 2θ   (sin 2θ)/2      ]
       [ 0    0       0       (1 − cos 2θ)/2   −sin 2θ   (1 + cos 2θ)/2  ]

φ_θ(x, y) is the frame rotated by some angle θ. Combining this result with the frame parametrization of the filters in section 2.3 gives us the possibility to rotate any learned feature by arbitrary and continuous angles θ. The whole procedure is completely analogous for any other Lie group transformation. Further, k-parameter transformation groups can be composed from smaller groups, by composing the exponentials of their generators as in section 2.4. Here an example of the general linear group of rotation, anisotropic scalings and skew, parametrized as φ_{θ1, s_x, s_y, θ2}(x, y) = A_{θ1, s_x, s_y, θ2} φ(x, y); the explicit interpolation matrix: Will be added to final manuscript."}]
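As a numerical check of the Appendix B derivation, the sketch below verifies that exp(θ·B_r) maps φ(x, y) to φ evaluated at the rotated coordinates. The sign convention (a clockwise coordinate rotation, matching L_r = y ∂/∂x − x ∂/∂y) is the one consistent with the stated B_r.

```python
import numpy as np
from scipy.linalg import expm

def phi(x, y):
    """The second-order Hermite frame from Appendix B."""
    return np.array([1.0, x, y, x**2 - 1, x * y, y**2 - 1])

Br = np.array([[0,  0, 0,  0,  0, 0],
               [0,  0, 1,  0,  0, 0],
               [0, -1, 0,  0,  0, 0],
               [0,  0, 0,  0,  2, 0],
               [0,  0, 0, -1,  0, 1],
               [0,  0, 0,  0, -2, 0]], dtype=float)

theta, x, y = 0.7, 0.3, -1.2
A = expm(theta * Br)                         # the interpolation matrix A_theta
xr = x * np.cos(theta) + y * np.sin(theta)   # clockwise-rotated coordinates
yr = -x * np.sin(theta) + y * np.cos(theta)

# Steering the frame coefficients equals evaluating the frame on the
# rotated grid:
assert np.allclose(A @ phi(x, y), phi(xr, yr))
```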
r1Aab85gg | [{"section_index": "0", "section_name": "OFFLINE BILINGUAL WORD VECTORS, ORTHOGONAI TRANSFORMATIONS AND THE INVERTED SOFTMAX", "section_text": "{samuel.smith, steven.hamblin, nils.hammerla}@babylonhealth.con dt382@cam.ac.uk\nUsually bilingual word vectors are trained \"online\".. Mikolov et al. (2013a showed they can also be found \"offline'; whereby two pre-trained embeddings. are aligned with a linear transformation, using dictionaries compiled from expert. knowledge. In this work, we prove that the linear transformation between two. spaces should be orthogonal. This transformation can be obtained using the singu. lar value decomposition. We introduce a novel \"inverted softmax\"' for identifying. translation pairs, with which we improve the precision @1 of Mikolov's original. mapping from 34% to 43%, when translating a test set composed of both common. and rare English words into Italian. Orthogonal transformations are more robust. to noise, enabling us to learn the transformation without expert bilingual signal. by constructing a \"pseudo-dictionary\" from the identical character strings which. appear in both languages, achieving 40% precision on the same test set. Finally. we extend our method to retrieve the true translations of English sentences from a corpus of 200k Italian sentences with a precision @1 of 68%."}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "Monolingual word vectors embed language in a high-dimensional vector space, such that the simi. larity of two words is defined by their proximity in this space (Mikolov et al.|2013b). They enable. us to train sophisticated classifiers to interpret free flowing text (Kim2014), but they require inde. pendent models to be trained for each language. Crucially, training text obtained in one language. cannot improve the performance of classifiers trained in another, unless the text is explicitly trans lated. Increasing interest is now focused on bilingual vectors, in which words are aligned by thei. meaning, irrespective of the language of origin. Such vectors may drive improvements in machine translation (Zou et al.|2013), and enable language-agnostic text classifiers (Klementiev et al.]2012) They can also be higher quality than monolingual vectors (Faruqui & Dyer2014).\nBilingual vectors are normally trained \"online\"', whereby both languages are learnt together in a shared space (Chandar et al.2014] Hermann & Blunsom2013). Typically these algorithms exploit two sources of monolingual text alongside a smaller bilingual corpus of aligned sentences. This bilingual signal provides a regularisation term, which penalises the embeddings if similar words in the two languages do not lie nearby in the vector space. HoweverMikolov et al.[(2013a) showed that bilingual word vectors can also be obtained \"offline\". Two sets of word vectors in different languages were first obtained independently, and then a linear matrix W was trained using a dictionary to map word vectors from the \"source\" language into the \"target' language. Remarkably, this simple procedure was able to translate a test set of English words into Spanish with 33% precision.\nTo develop an intuition for these two approaches, we note that the similarity of two word vectors is defined by their cosine similarity, cos(0) = yf x/|yi||x,]. The vectors have no intrinsic meaning it is only the angles between vectors which are meaningful. This is closely analogous to asking a cartographer to draw a map of England with no compass. 
The map will be correct, but she does not know which direction is north, so the angle of rotation will be random. Two maps drawn by two such cartographers will be identical, except that one will be rotated by an unknown angle with respect to the other. There are two ways the cartographers could align their maps."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "[Figure: a 2D plane through the English-Italian semantic space before and after the SVD; word pairs such as music/musica, document/documento, cooking/cucinare, spectrum/spettro, traffic/traffico, past/passato, couple/paio and passenger/passeggeri are scattered before alignment and coincide after.]

Figure 1: A 2D plane through an English-Italian semantic space, before and after applying the SVD on the word vectors discussed below, using a training dictionary of 5000 translation pairs. The examples above were not used during training, but the SVD aligns the translations remarkably well.

They could draw the maps together, thus ensuring that landmarks are placed nearby on both maps during "training". Or they could draw their maps independently, and then compare the two afterwards, rotating one map with respect to the other until the major cities are aligned. We note that the more similar the intrinsic geometry of the two maps, the more accurately this rotation will align the space.

The main contribution of this work is to provide theoretical insights which unify and enhance existing approaches in the literature. We prove that a self-consistent linear transformation between vector spaces should be orthogonal. Intuitively, the transformation is a rotation, and it is found using the singular value decomposition (SVD). The shared semantic space obtained by the SVD is illustrated in figure 1. We build on the work of Dinu et al. (2014) by introducing a novel "inverted softmax" to combat the hubness problem. Using the same word vectors, training dictionary and test set provided by Dinu, we improve the precision of Mikolov's method from 34% to 43% when translating from English to Italian, and from 25% to 37% when translating from Italian to English. We also present three remarkable new results. First, we exploit the superior robustness of orthogonal transformations by discarding the training dictionary and forming a pseudo-dictionary from the identical character strings which appear in both languages. While Mikolov's method achieves translation precisions of just 1% and 3% respectively with this pseudo-dictionary, our approach achieves precisions of 40% and 34%. This is a striking result, achieved without any expert bilingual signal. Next, we form simple sentence vectors by summing and normalising over word vectors, and we obtain bilingual sentence vectors by applying the SVD to a phrase dictionary formed from a bilingual corpus of aligned text. The transformation obtained aligns the underlying word vectors, achieving translation precisions of 43% and 38%, on par with the expert word dictionary above. Finally, we show that we can also use our bilingual word vectors to retrieve sentence translations, identifying the translation of an English sentence from a bag of 200k Italian candidate sentences with 68% precision.

Offline bilingual word vectors were first proposed by Mikolov et al. (2013a). They obtained a small dictionary of paired words from Google Translate, whose word vectors we denote {y_i, x_i}_{i=1}^n. Next, they applied a linear transformation W to the source language and used stochastic gradient descent to minimise the squared reconstruction error,

min_W Σ_{i=1}^n ||y_i − W x_i||².    (1)

After training, any word vector in the source language can be mapped to the target by calculating y_e = W x. The similarity between a source vector x and a target vector y_t can then be evaluated by the cosine similarity cos(θ_te) = y_t^T y_e / (|y_t||y_e|).
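This mapping can be written in a few lines. The following is a minimal sketch, not the authors' released code; the random matrices are stand-ins for real word-vector matrices, and the closed-form least-squares solve replaces the stochastic gradient descent described in the text (both minimise the same cost).

```python
import numpy as np

# Rows of X_d / Y_d hold the source / target word vectors of the n dictionary
# pairs. Minimising sum_i ||y_i - W x_i||^2 is a least-squares problem:
# Y_d ~= X_d W^T, so lstsq recovers W^T directly.
rng = np.random.default_rng(0)
n, dim = 5000, 300
X_d = rng.standard_normal((n, dim))   # stand-in source dictionary vectors
Y_d = rng.standard_normal((n, dim))   # stand-in target dictionary vectors

W = np.linalg.lstsq(X_d, Y_d, rcond=None)[0].T

x = rng.standard_normal(dim)          # a new source word vector
y_e = W @ x                           # its predicted image in the target space
```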
Astonishingly, this simple procedure achieved 33% accuracy when translating unseen words from English into Spanish, using a training dictionary of 5k common English words and their Spanish translations, and word vectors trained using word2vec on the WMT11 text datasets. Translations were found by a simple nearest neighbour procedure.

We note that the cost function above is solved by the method of least squares, as realised by Dinu et al. (2014). They did not modify this cost function, but proposed an adapted method of retrieving translation pairs which was more accurate when translating words from English to Italian. Faruqui & Dyer (2014) obtained bilingual word vectors using CCA. They did not attempt any translation tasks, but showed that the combination of CCA and dimensionality reduction improved the performance of monolingual vectors on standard evaluation tasks. CCA had previously been used to iteratively extract translation pairs directly from monolingual corpora (Haghighi et al., 2008). More recently, Xing et al. (2015) argued that Mikolov's linear matrix should be orthogonal, and introduced an approximate procedure composed of gradient descent updates and repeated applications of the SVD. CCA has been extended to map 59 languages into a single shared space (Ammar et al., 2016), and non-linear "deep CCA" has been introduced (Lu et al., 2015). A theoretical analysis of bilingual word vectors similar to this paper was recently published by Artetxe et al. (2016).

To prove that a self-consistent linear mapping between semantic spaces must be orthogonal, we form the similarity matrix S = YWX^T. X and Y are word vector matrices for each language, in which each row contains a single word vector, denoted by lower case x and y. The matrix element

S_ij = (W x_j) · y_i    (2)

evaluates the similarity between the jth source word and the ith target word. The matrix W maps the source language into the target language. The largest value in a column of the similarity matrix gives the most similar target word to a particular source word, while the largest value in a row gives the most similar source word to a given target word. However, we could also form a second similarity matrix S' = XQY^T, such that the matrix Q maps the target language back into the source. The matrix element

S'_ji = (Q y_i) · x_j    (3)

also evaluates the similarity between the jth source word and the ith target word. To be self-consistent, we require S' = S^T. However S^T = XW^TY^T, and therefore the matrix Q = W^T.
If W maps the source language into the target, then W^T maps the target language back into the source. When we map a source word into the target language, we should be able to map it back into the source language and obtain the original vector: x ≈ W^T y and y ≈ W x, and thus x ≈ W^T W x. This expression should hold for any word vector x, and thus we conclude that the transformation W should be an orthogonal matrix O satisfying O^T O = I, where I denotes the identity matrix. Orthogonal transformations preserve vector norms, so if we normalise X and Y, then the matrix element S_ij = |y_i||O x_j| cos(θ_ij) = cos(θ_ij). The similarity matrix S = YOX^T computes the cosine similarity between all possible pairs of source and target words under the orthogonal transformation O.

We now infer the orthogonal transformation O from a dictionary {y_i, x_i}_{i=1}^n of paired words. Since we predict the similarity of two vectors by evaluating S_ij = cos(θ_ij), we ought to learn the transformation by maximising the cosine similarity of translation pairs in the dictionary,

max_O Σ_{i=1}^n y_i^T O x_i,  subject to O^T O = I.    (6)

The solution proceeds as follows. We form two ordered matrices X_D and Y_D from the dictionary, such that the ith row of {X_D, Y_D} corresponds to the source and target language word vectors of the ith pair in the dictionary. We then compute the SVD of M = Y_D^T X_D = UΣV^T. This step is highly efficient, since M is a square matrix with the same dimensionality as the word vectors. U and V are composed of columns of orthonormal vectors, while Σ is a diagonal matrix containing the singular values. Our cost function is minimised by O = UV^T. The optimised similarity matrix,

S = YOX^T = (YU)(XV)^T.

Thus, we map both languages into a single space, by applying the transformation V^T to the source language and U^T to the target language. We prove that this procedure maximises equation 6 in the appendix. It was recently independently proposed by Artetxe et al. (2016), and provides a numerically exact solution to the cost function proposed by Xing et al. (2015), just as the method of least squares provides a numerically exact solution to the cost function of Mikolov et al. (2013a).

Our procedure did not use the singular values Σ, but these values do carry relevant information. All of the singular values are positive, and each singular value s_i is uniquely associated to a pair of normalised vectors u_i and v_i from the matrices U and V. Standard implementations of the SVD return the singular values in descending order. The larger the singular value, the more rapidly the mean cosine similarity of the dictionary decreases if the corresponding vectors are distorted. We can perform dimensionality reduction by neglecting the vectors {u_i, v_i} which arise from the smallest singular values. This is trivial to implement by simply dropping the final few rows of U^T and V^T, and we will show below that it leads to a small improvement in the translation performance.
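The procedure just described is short enough to state as code. A minimal sketch follows (with synthetic stand-ins for the word-vector matrices, whose rows are assumed unit-normalised); `keep` implements the dimensionality reduction by dropping the directions associated with the smallest singular values.

```python
import numpy as np

def svd_align(X_d, Y_d, X, Y, keep=None):
    """Align two embedding spaces from a dictionary of paired rows.

    Rows of X_d, Y_d are the dictionary word vectors; rows of X, Y are the
    full vocabularies. Returns the vocabularies mapped into the shared
    space, X V and Y U, so that similarities are (Y U)(X V)^T.
    """
    M = Y_d.T @ X_d                      # square, dim x dim: cheap for any n
    U, s, Vt = np.linalg.svd(M)
    if keep is not None:                 # dimensionality reduction: drop the
        U, Vt = U[:, :keep], Vt[:keep]   # vectors of the smallest singular values
    return X @ Vt.T, Y @ U

rng = np.random.default_rng(0)
dim = 300
X_d, Y_d = rng.standard_normal((5000, dim)), rng.standard_normal((5000, dim))
X, Y = rng.standard_normal((20000, dim)), rng.standard_normal((20000, dim))
X_shared, Y_shared = svd_align(X_d, Y_d, X, Y, keep=250)
S = Y_shared @ X_shared.T   # similarity of every target/source pair
```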
"}, {"section_index": "3", "section_name": "2.3 THE SVD AND CCA", "section_text": "Our method is very similar to the CCA procedure proposed by Faruqui & Dyer (2014), which can also be obtained using the SVD (Press, 2011). We first obtain the source dictionary matrix X_D and subtract the mean from each column of this matrix to obtain X'_D. We then perform our first SVD to obtain X'_D = Q_D Σ_X V_X^T. We perform the same two operations on the target dictionary Y_D to obtain Y'_D = W_D Σ_Y V_Y^T, and then perform another SVD on the product M' = Q_D^T W_D = U'Σ'V'^T. This last step is identical to the alignment procedure we introduced above. Finally, we obtain X' and Y' by subtracting the mean value of each column in X_D and Y_D, before computing a new pair of aligned representations of the full vocabulary, Q_aligned = X'V_X Σ_X^{−1} U' and W_aligned = Y'V_Y Σ_Y^{−1} V'. Once again, we perform dimensionality reduction by neglecting the final few columns of U' and V'.

Effectively, CCA is composed of two stages. In the first stage, we replace our word vector matrices X and Y by two new vector representations Q = X'V_X Σ_X^{−1} and W = Y'V_Y Σ_Y^{−1}. In the second stage, we apply the orthogonal transformations {U', V'} to align {Q, W} in a single shared space. To the authors, the first stage appears redundant: if we have already learned high-quality word vectors {X, Y}, there seems little reason to learn new representations {Q, W}. Additionally, it is unclear why the transformations V_X Σ_X^{−1} and V_Y Σ_Y^{−1} are obtained using only the dictionary matrices {X_D, Y_D}, rather than the full vocabularies {X, Y}.

Mikolov et al. (2013a) predicted the translation of a source word x_j by finding the target word y_i closest to W x_j. In our formalism, this corresponds to finding the largest entry in the jth column of the similarity matrix. To estimate our confidence in this prediction, we could form the softmax,

P_{j→i} = e^{β S_ij} / Σ_m e^{β S_mj}.    (10)

To learn the "inverse temperature" β, we maximise the log probability over the training dictionary,

max_β Σ_{pairs ij} ln(P_{j→i}).    (11)

This sum should be performed only over valid translation pairs. Dinu et al. (2014) demonstrated that nearest neighbour retrieval is flawed, since it suffers from the presence of "hubs": words which appear as the nearest neighbour target word to many different source words, reducing the translation performance. We propose that the hubness problem is mitigated by inverting the softmax, and normalising the probability over source words rather than target words,

P_{j→i} = e^{β S_ij} / (α Σ_n e^{β S_in}).    (12)

Intuitively, rather than asking whether the source word translates to the candidate target word, we assess the probability that the candidate target word translates back into the source word. We then select the target word which maximises this probability. If the ith target word is a hub, then the
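A sketch of retrieval with equations 10 and 12 follows, given the aligned similarity matrix S (target words along rows, source words along columns). The normalisation constant α drops out of the argmax, and the sampled sum over source words follows the cost-saving approximation described in the next paragraph.

```python
import numpy as np

def nearest_neighbour(S):
    """Equation 10 retrieval: for each source column j, the argmax over
    targets (independent of beta, this is plain nearest-neighbour search)."""
    return np.argmax(S, axis=0)

def inverted_softmax(S, beta=10.0, n_samples=1500, rng=None):
    """Equation 12 retrieval: score each target word i by exp(beta*S_ij),
    normalised over (a sample of) source words n, so that hub targets are
    penalised by a large denominator. alpha cancels in the argmax."""
    rng = rng or np.random.default_rng(0)
    cols = rng.choice(S.shape[1], size=min(n_samples, S.shape[1]), replace=False)
    denom = np.exp(beta * S[:, cols]).sum(axis=1, keepdims=True)
    return np.argmax(np.exp(beta * S) / denom, axis=0)
```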
Intuitively, each aligned. sentence pair gives us weak information about a possible word pair in the dictionary. By combining. a large number of such sentence pairs, we obtain sufficient information to align the vector spaces and infer the translations of individual words. However, we will go on to show that this orthogonal transformation can be used, not only to retrieve the translations of words between languages, but also to retrieve the translations of sentences between languages with remarkably high accuracy..\nWe perform our experiments using the same word vectors, training dictionary and test dictionary provided byDinu et al.(2014 The word vectors were trained using word2vec, and then the 200k most common words in both the English and Italian corpora were extracted. The English word vec- tors were trained on the WackyPedia/ukWaC and BNC corpora, while the Italian word vectors were trained on the WackyPedia/itWaC corpus. The training dictionary comprises 5k common English words and their Italian translations, while the test set is composed of 1500 English words and their Italian translations. This test set is split into five sets of 300. The first 300 words arise from the most common 5k words in the English corpus, the next 300 from the 5k-20k most common words. followed by bins for the 20k-50k, 50k-100k, and 100k-200k most common words. This enables us to evaluate how word frequency affects the translation performance. Some of the Italian words have both male and female forms, and we follow Dinu in considering either form a valid translation.\nWe report results using our own procedure, as well as the methods proposed by Mikolov, Faruqui and Dinu. We compute results for Mikolov's method by applying the method of least squares, and results for Faruqui's method using Scikit-learn's implementation of CCA with default parameters. In both cases, we predict translations by nearest neighbour retrieval. We do not apply dimensionality\neorgiana.dinu/down\ndenominator in equation |12|will be large, preventing this target word from being selected. The vector a ensures normalisation. The sum over n should run over all source words in the vocabulary.. However to reduce the computational cost, we only perform this sum over ns sample words, chosen. randomly from the vocabulary. Unless explicitly stated, ns = 1500..\nTable 1: Translation performance using the expert training dictionary, English into Italian\nMikolov Dinu + inverted + dimensionality Precision CCA SVD et al. et al. softmax reduction @1 0.338 0.385 0.361 0.369 0.417 0.431 @5 0.483 0.564 0.527 0.527 0.587 0.607 @10 0.539 0.639 0.581 0.579 0.655 0.664\nTable 2: Translation performance using the expert training dictionary, Italian into English\nMikolov Dinu + inverted + dimensionality Precision CCA SVD et al. et al. softmax reduction @1 0.249 0.246 0.310 0.322 0.373 0.380 @5 0.410 0.454 0.499 0.496 0.577 0.585 @10 0.474 0.541 0.570 0.557 0.631 0.636\nreduction following CCA, to enable a fair comparison with our SVD procedure. We compute results for Dinu's method using the source code they provided alongside their manuscript. Their method uses 10k source words as \"pivots'; 5k from the test set and 5k chosen at random from the vocabulary By contrast, the inverted softmax does not know which source words occur in the test set.\nIn tables[1and2|we present the translation performance of our methods when translating the test set between English and Italian, using the expert training dictionary provided by Dinu. 
We evaluate Mikolov and Dinu's methods for comparison, as well as CCA- proposed by[Faruqui & Dyer (2014) All the methods are more accurate when translating from English to Italian. This is unsurprising given that some English words in the test set can translate to either the male or female form of the Italian word. In the fourth column we evaluate the performance of our SVD procedure with nearest neighbour retrieval. This already provides a marked improvement on Mikolov's mapping especially when translating from Italian into English. As anticipated, the performance of the SVD is very similar to CCA. In the following two columns we apply first the inverted softmax, and then dimensionality reduction to the aligned vector space obtained by the SVD. The hyper-parameters of these procedures were optimised on the training dictionary. Combining both procedures improves the precision @1 to 43% and 38% when translating from English to Italian, or Italian to English respectively. These results are a significant improvement on previous work. In table 3|we present the dependence of precision @1 on word frequency. We achieve remarkably high precision when translating common words. This performance drops off for rare words; presumably either because there is insufficient monolingual data to learn high quality rare word vectors, or because the linguistic similarities between rare words across languages are less pronounced.\nTable 3: Translation precision @1 from English to Italian using the expert training dictionary. We achieve 69% precision on test cases selected from the 5k most common English words in the ukWaC Wikipedia and BNC corpora. The precision falls for less common words.\nWord ranking Mikolov Dinu + inverted + dimensionality CCA SVD by frequency et al. et al. softmax reduction 0-5k 0.607 0.650 0.633 0.637 0.690 0.690 5-20k 0.463 0.540 0.477 0.510 0.580 0.610 20-50k 0.280 0.350 0.343 0.323 0.380 0.403 50-100k 0.193 0.217 0.190 0.200 0.230 0.253 100-200k 0.147 0.163 0.163 0.173 0.203 0.200\nIn the preceding section, we reported our performance using an orthogonal transformation learned on an expert training dictionary of 5k common English and Italian words. We now report our per- formance when we do not use this dictionary, and instead construct a pseudo dictionary from the list of words which appear in both the English and Italian vocabularies, composed of exactly the same character string. Remarkably, 47074 such identical character strings appear in both vocabularies There would be fewer identical entries for more diverse language pairs, but our main goal here is to demonstrate the superior robustness of orthogonal transformations to low quality dictionaries.\nWe exhibit our results in table4] where we evaluate our method (SVD + inverted softmax + dimen. sionality reduction), when translating either from English to Italian or from Italian to English. Ever when using this pseudo dictionary prepared with no expert bilingual knowledge, we still achiev. a mean translation performance @1 of 40% from English to Italian on our test set. By contrast. Mikolov and Dinu's methods achieve precisions of just 1% and 6% respectively. CCA also per. forms well, although it became significantly more computationally expensive when the vocabulary. size increased. Previously translation pairs have been extracted from monolingual corpora usin, CCA by bootstrapping a small seed lexicon (Haghighi et al.]2008).\nThe English-Italian Europarl corpus comprises 2 million English sentences and their Italian trans. 
The English-Italian Europarl corpus comprises 2 million English sentences and their Italian translations, taken from the proceedings of the European parliament (Koehn, 2005). As outlined earlier, we can form simple sentence vectors in the word vector space by summing and normalising over the words contained in a sentence. These sentence vectors can be used in two different tasks. First, we can use the Europarl corpus as a training dictionary, whereby the matrices $X_D$ and $Y_D$ are formed from the sentence vectors of translation pairs. By applying the SVD to the first 500k sentences in this "phrase dictionary", we obtain a set of bilingual word vectors from which we can retrieve translations of individual words. We exhibit the translation performance of this approach in table 5. We achieve 42.8% precision @1 when translating from English into Italian and 37.5% precision when translating from Italian into English, comparable to the accuracy achieved using the expert word dictionary on the same test set. It is difficult to compare the two approaches, since they require different training data. However, our performance appears competitive with Bilbowa, a leading method for learning bilingual vectors online from monolingual corpora and aligned text (Gouws et al., 2015). We do not include results for CCA due to the computational complexity on a dictionary of this size.

Table 5: Translation performance, using the Europarl corpus as a phrase dictionary

            English to Italian:                            Italian to English:
Precision   Mikolov et al.   Dinu et al.   This work       Mikolov et al.   Dinu et al.   This work
@1          0.234            0.313         0.428           0.19             0.224         0.375
@5          0.368            0.531         0.589           0.331            0.419         0.563
@10         0.433            0.594         0.647           0.39             0.508         0.620

Second, we can apply our orthogonal transformation to retrieve the Italian translation of an English sentence, or vice versa. To achieve this, we hold back the final 200k English and Italian sentences from our 500k sample of Europarl, and attempt to retrieve the true translation of a given sentence in this test set. We obtain the orthogonal transformation by performing the SVD on either the expert word dictionary provided by Dinu, or on a phrase dictionary formed from the first 300k sentences from Europarl. For simplicity, we do not apply dimensionality reduction here. Our results are provided in table 6. For precision @1, most approaches favour the phrase dictionary, while Dinu's method favours the word dictionary. We show in the appendix that all methods favour the phrase dictionary for precision @5 and @10. Remarkably, given no information except the sentence vectors, we are able to retrieve the correct translation of an English sentence with 67.8% precision. This is particularly surprising, since we are using the simplest possible sentence vectors, which have no information about word order or sentence length. It is likely that we could improve on these results if we used higher quality sentence vectors (Le & Mikolov, 2014; Kiros et al., 2015), although we might lose the ability to simultaneously align the underlying word vector space.

Table 6: "Translation" precision @1, when seeking to retrieve the true translation of an English sentence from a bag of 200k Italian sentences, or vice versa, averaged over 5k samples. We first obtain bilingual word vectors, using either the word dictionary provided by Dinu, or by constructing a phrase dictionary from Europarl. We set $n_s = 12800$ in the inverted softmax.

                     English to Italian:                     Italian to English:
                     Word dictionary   Phrase dictionary     Word dictionary   Phrase dictionary
Mikolov et al.       0.105             0.166                 0.120             0.206
Dinu et al.          0.453             0.406                 0.489             0.459
SVD                  0.268             0.431                 0.473             0.656
+ inverted softmax   0.546             0.678                 0.429             0.486
When training the inverted softmax, the inverse temperature $\beta$ diverged, and the "translation" performance from English to Italian significantly exceeded the performance from Italian to English. This suggested that sentence retrieval from Italian to English might be achieved better by nearest neighbours, so we also evaluated the performance of nearest neighbour retrieval on the same orthogonal transformation, as shown in the third row of table 6. This improved the performance from Italian to English from 48.6% to 65.6%, which suggests that the optimal retrieval approach would be able to tune continuously between the conventional softmax and the inverted softmax."}, {"section_index": "6", "section_name": "4 SUMMARY", "section_text": "We proved that the optimal linear transformation between word vector spaces should be orthogonal, and can be obtained by a single application of the SVD on a dictionary of translation pairs, as proposed independently by Artetxe et al. (2016). We used the SVD to obtain bilingual word vectors, from which we can predict the translations of previously unseen words. We introduced a novel "inverted softmax" which significantly increased the accuracy of our predicted translations. Combining the SVD with the inverted softmax and dimensionality reduction, we improved the translation precision of Mikolov's original linear mapping from 34% to 43%, when translating a test set composed of both common and rare English words into Italian. This was achieved using a training dictionary of 5k English words and their Italian translations. Replacing this training dictionary with a pseudo-dictionary acquired from the identical word strings that appear in both languages, we showed that we still achieved 40% precision, demonstrating that it is possible to obtain bilingual vector spaces without an expert bilingual signal. Mikolov's method achieves just 1% precision here, emphasising the superior robustness of orthogonal transformations. There are currently a number of approaches to obtaining offline bilingual word vectors in the literature. Our work shows they can all be unified.

Finally, we defined simple sentence vectors to obtain offline bilingual word vectors without a dictionary using the Europarl corpus. We achieved 43% precision when translating our test set from English into Italian under this approach, comparable to our results above, and competitive with online approaches which use aligned text as the bilingual signal. We demonstrated that we can also use our sentence vectors to retrieve the true translation of an English sentence from a bag of 200k Italian candidate sentences with 68% precision, a striking result worthy of further investigation."}, {"section_index": "7", "section_name": "ACKNOWLEDGMENTS", "section_text": "We thank Dinu et al. for providing their source code, pre-trained word vectors, and a training and test dictionary of English and Italian words, and Philipp Koehn for compiling the Europarl corpus.
"}, {"section_index": "8", "section_name": "REFERENCES", "section_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. Massively multilingual word embeddings. arXiv:1602.01925, 2016.

Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C. Raykar, and Amrita Saha. An autoencoder approach to learning bilingual word representations. In Advances in Neural Information Processing Systems, pp. 1853-1861, 2014.

Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. Improving zero-shot learning by mitigating the hubness problem. arXiv:1412.6568, 2014.

Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. Learning bilingual lexicons from monolingual corpora. In ACL, volume 2008, pp. 771-779, 2008.

Karl Moritz Hermann and Phil Blunsom. Multilingual distributed representations without word alignment. arXiv:1312.6173, 2013.

Yoon Kim. Convolutional neural networks for sentence classification. arXiv:1408.5882, 2014.

Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. Inducing crosslingual distributed representations of words. 2012.

Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In MT Summit, volume 5, pp. 79-86, 2005.

Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. Deep multilingual correlation for improved word embeddings. In Proceedings of NAACL, 2015.

Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. arXiv:1309.4168, 2013a.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013b.

William H. Press. Canonical correlation clarified by singular value decomposition. 2011."}, {"section_index": "9", "section_name": "THE ORTHOGONAL PROCRUSTES PROBLEM", "section_text": "There is not an intuitive analytic solution to the cost function in equation (6), but an analytic solution does exist to the closely related "orthogonal Procrustes problem", which minimises the squared reconstruction error subject to an orthogonal constraint (Schönemann, 1966),

$$\min_O \sum_{i=1}^{n} \|y_i - O x_i\|^2, \quad \text{subject to} \quad O^T O = I. \qquad (13)$$

However, both X and Y are normalised, while O preserves the vector norm. We note that

$$\sum_i \|y_i - O x_i\|^2 = \sum_i \left( \|y_i\|^2 + \|x_i\|^2 - 2\, y_i^T O x_i \right) = A - 2 \sum_i y_i^T O x_i.$$

A is a constant, and so the cost functions given in equations (6) and (13) are equivalent. We presented the solution of the orthogonal Procrustes problem in the main text.
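The closed-form solution can also be checked numerically. The following sketch, our own and run on synthetic data, plants a hidden orthogonal map and verifies that the SVD recipe recovers it exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 1000

# Unit-norm source vectors and a hidden orthogonal map O_true.
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
O_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ O_true.T                                  # so y_i = O_true x_i exactly

# Procrustes recipe: SVD of M = Y^T X, then O = U V^T.
U, _, Vt = np.linalg.svd(Y.T @ X)
O = U @ Vt
assert np.allclose(O, O_true)                     # the hidden map is recovered
```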
"}, {"section_index": "10", "section_name": "3 ADDITIONAL EXPERIMENTS", "section_text": "In tables 7 and 8, we provide results at precisions @5 and @10, for the same experiments shown @1 in table 6 of the main text. Once again, the inverted softmax performs well when retrieving the Italian translations of English sentences, but is less effective translating Italian sentences into English. However, the performance of Dinu's method appears to rise more rapidly than other methods as we transition from precision @1 to @5 to @10. Additionally, while Dinu's method performs better when using the word dictionary @1, it prefers the phrase dictionary @5 and @10.

Table 7: "Translation" precision @5, when seeking to retrieve the true translation of an English sentence from a bag of 200k Italian sentences, or vice versa, averaged over 5k samples. We first obtain bilingual word vectors, using either the word dictionary provided by Dinu, or by constructing a phrase dictionary from Europarl. We set $n_s = 12800$ in the inverted softmax.

                     English to Italian:                     Italian to English:
                     Word dictionary   Phrase dictionary     Word dictionary   Phrase dictionary
Mikolov et al.       0.187             0.272                 0.221             0.326
Dinu et al.          0.724             0.732                 0.713             0.765
SVD                  0.394             0.546                 0.619             0.774
+ inverted softmax   0.727             0.825                 0.622             0.679

Table 8: "Translation" precision @10, when seeking to retrieve the true translation of an English sentence from a bag of 200k Italian sentences, or vice versa, averaged over 5k samples. We first obtain bilingual word vectors, using either the word dictionary provided by Dinu, or by constructing a phrase dictionary from Europarl. We set $n_s = 12800$ in the inverted softmax."}]
rJTKKKqeg | [{"section_index": "0", "section_name": "TRACKING THE WORLD STATE WITH RECURRENT ENTITY NETWORKS", "section_text": "Mikael Henaff^{1,2}, Jason Weston^1, Arthur Szlam^1, Antoine Bordes^1 and Yann LeCun^{1,2}

^1 Facebook AI Research"}, {"section_index": "1", "section_name": "1 INTRODUCTION", "section_text": "The essence of intelligence is the ability to predict. An intelligent agent must be able to predict unobserved facts about their environment from limited percepts (visual, auditory, textual, or otherwise), combined with their knowledge of the past. In order to reason and plan, they must be able to predict how an observed event or action will affect the state of the world. Arguably, the ability to maintain an estimate of the current state of the world, combined with a forward model of how the world evolves, is a key feature of intelligent agents.

A natural way for an agent to represent the world is to maintain a set of high-level concepts or entities together with their properties, which are updated as new information is received. For example, if a percept is the textual description of an event, such as "John walks out of the kitchen", the agent should learn to update its estimate of John's location, as well as the list (and number) of people present in each room. If John was carrying a bag, the location of the bag and the list of objects in the kitchen must also be updated. When we read a story, each sentence we read or hear causes us to update our internal representation of the current state of the world within the story. The flow of the story is captured by the evolution of this state of the world.

At any given time, an agent typically receives limited information about the state of the world, and should therefore be able to infer new information through partial observation. In this paper, we investigate this problem through a simple story understanding scenario, in which the agent is given a sequence of textual statements and events, and then given another series of statements about the final state of the world. If the second series of statements is given in the form of questions about the final state of the world together with their correct answers, the agent should be able to learn from them and its performance can be measured by the accuracy of its answers."}, {"section_index": "2", "section_name": "ABSTRACT", "section_text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond, as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon.
It can also be practically used on large scale datasets such as the Children's Book Test, where it obtains competitive performance, reading the story in a single pass.

Even with this weak form of supervision, the system may learn basic dynamical constraints about the world. For example, it may learn that a person or object cannot be in two locations at the same time, or may learn simple update rules such as incrementing and decrementing the number of persons or objects in a room. It may also learn basic rules of approximate (logical) inference, such as the fact that objects belonging to the same category tend to have similar properties (light objects can be carried from room to room, for instance).

We propose to handle this scenario with a new kind of memory-augmented neural network that uses a distributed memory and processor architecture: the Recurrent Entity Network (EntNet). The model consists of a fixed number of dynamic memory cells, each containing a vector key $w_j$ and a vector value (or content) $h_j$. Each cell is associated with its own "processor", a simple gated recurrent network that may update the cell value given an input. If each cell learns to represent a concept or entity in the world, one can imagine a gating mechanism that, based on the key and content of the memory cells, will only modify the cells that concern the entities mentioned in the input. In the current version of the model, there is no direct interaction between the memory cells, hence the system can be seen as multiple identical processors functioning in parallel, with distributed local memory. Alternatively, the EntNet can be seen as a bank of gated RNNs (all sharing the same parameters), whose hidden states correspond to latent concepts and attributes, and whose parameters describe the laws of the world according to which the attributes of objects are updated. The sharing of these parameters reflects an invariance of these laws across object instances, similarly to how the weight tying scheme in a CNN reflects an invariance of image statistics across locations. Their hidden state is updated only when new information relevant to their concept is received, and remains otherwise unchanged. The keys used in the addressing/gating mechanism also correspond to concepts or entities, but are modified only during learning, not during inference.

The EntNet is able to solve all 20 bAbI question-answering tasks (Weston et al., 2015), a popular benchmark of story understanding, which to our knowledge sets a new state-of-the-art. Our experiments also indicate that the model indeed maintains an internal representation of the simplified world in which the stories take place, and that the model does not limit itself to storing the aspects of the world required to answer a specific question. We also introduce a new reasoning task which, unlike the bAbI tasks, requires a model to use a large number of supporting facts to answer the question, and show that the EntNet outperforms both LSTMs and Memory Networks (Sukhbaatar et al., 2015) by a significant margin. It is also able to generalize to sequences longer than those seen during training. Finally, our model also obtains competitive results on the Children's Book Test (Hill et al., 2016), and performs best among models that read the text in a single pass before receiving knowledge of the question.
"}, {"section_index": "3", "section_name": "2.1 INPUT ENCODER", "section_text": "Our model is designed to process data in sequential form, and consists of three main parts: an input encoder, a dynamic memory and an output layer, which we now describe in detail. We developed it in the context of question answering on short stories where the inputs are word sequences, but the model could be adapted to many other contexts.

The encoding layer summarizes an element of the input sequence with a vector of fixed length. Typically the input element at time t is a sequence of words, e.g. a sentence or window of words. One is free to choose the encoding module to be any standard sequence encoder, which is an active area of research. Typical choices include a bag-of-words (BoW) representation or the final state of a recurrent neural net (RNN) run over the sequence. In this work, we use a simple encoder consisting of a learned multiplicative mask followed by a summation. More precisely, let the input at time t be a sequence of words with embeddings $\{e_1, ..., e_k\}$. The vector representation of this input is then:

$$s_t = \sum_{i=1}^{k} f_i \odot e_i$$

The same set of vectors $\{f_1, ..., f_k\}$ are used at each time step and are learned jointly with the other parameters of the model. Note that the model can choose to adopt a standard BoW representation by setting all weights in the multiplicative mask to 1, or can choose a positional encoding model as used in (Sukhbaatar et al., 2015).

Figure 1: Diagram of the Recurrent Entity Network's dynamic memory. Update equations 1 and 2 are represented by the module $f_\theta$, where $\theta$ is the set of trainable parameters. Equations 3 and 4 are represented by the gate, since they fulfill a similar function."}, {"section_index": "4", "section_name": "2.2 DYNAMIC MEMORY", "section_text": "The dynamic memory is a gated recurrent network with a (partially) block structured weight tying scheme. We divide the hidden states of the network into blocks $h_1, ..., h_m$; the full hidden state is the concatenation of the $h_j$. In the experiments below, $m$ is of the order of 5 to 20, and each block $h_j$ is of the order of 20 to 100 units.

At each time step t, the contents of the hidden states $\{h_j\}$ (which we will call the memories) are updated using a set of key vectors $\{w_j\}$ and the encoded input $s_t$. In its most general form, the update equations of our model are given by:

$$g_j \leftarrow \sigma(s_t^T h_j + s_t^T w_j) \qquad (1)$$
$$\tilde{h}_j \leftarrow \phi(U h_j + V w_j + W s_t) \qquad (2)$$
$$h_j \leftarrow h_j + g_j \odot \tilde{h}_j \qquad (3)$$
$$h_j \leftarrow h_j / \|h_j\| \qquad (4)$$

Here $\sigma$ represents a sigmoid, $g_j$ is a gating function which determines how much the $j$th memory should be updated, and $\tilde{h}_j$ is the new candidate value of the memory, to be combined with the existing memory $h_j$. The function $\phi$ can be chosen from any number of activation functions; in our experiments we use either parametric ReLU non-linearities (He et al., 2015) or the identity. The matrices $U, V, W$ are typically trainable parameters of the model, and are shared between all the blocks. They can also be fixed to certain values, such as the identity or zero, to yield a simpler model which we use in some of our experiments.

The gating function $g_j$ contains two terms: a "content" term $s_t^T h_j$, which causes the gate to open for memory slots whose content matches the input, and a "location" term $s_t^T w_j$, which causes the gate to open for memory slots whose key matches the input. The final normalization step allows the model to forget previous information. To see this, note that since the memories lie on the unit sphere, all information is contained in their phase. Adding any vector to a given memory (other than the memory itself) will decrease the cosine distance between the original memory and the updated one. Therefore, as new information is added, old information is forgotten.
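To make the update concrete, the sketch below runs one dynamic-memory step over all slots in parallel. It is our own illustrative NumPy code, not the authors' Torch implementation; tanh stands in for the parametric ReLU, and the shapes in the docstring are our assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entnet_step(H, keys, s, U, V, W, phi=np.tanh):
    """One dynamic-memory update (equations 1-4) for all slots in parallel.

    H:    (m, d) current memory values h_j     keys: (m, d) key vectors w_j
    s:    (d,)   encoded input s_t             U, V, W: (d, d) shared weights
    phi:  the paper uses a parametric ReLU or the identity; tanh is only a
          stand-in to keep this sketch dependency-free.
    """
    g = sigmoid(H @ s + keys @ s)                    # content + location gate (1)
    H_tilde = phi(H @ U.T + keys @ V.T + W @ s)      # candidate values        (2)
    H = H + g[:, None] * H_tilde                     # gated write             (3)
    return H / np.linalg.norm(H, axis=1, keepdims=True)  # forget by renorm    (4)
```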
Whenever the model is required to produce an output, it is presented with a query vector $q$. Specifically, the output is computed using the following equations:

$$p_j = \text{Softmax}(q^T h_j)$$
$$u = \sum_j p_j h_j$$
$$y = R\, \phi(q + H u)$$

The matrices $H$ and $R$ are additional trainable parameters of the model. The output module can be viewed as a one-hop Memory Network (Sukhbaatar et al., 2015) with an additional non-linearity between the internal state and the decoder matrix. If the memory slots correspond to specific words (as we will describe in the following section) which contain the answer, $p$ can be viewed as a distribution over potential answers and can be used to make a prediction directly or fed into a loss function, removing the need for the last two steps.

The entire model (all three components described above) is trained via backpropagation through time, receiving gradients from any time steps where the reader is required to produce an output, which are then propagated through the unrolled network."}, {"section_index": "5", "section_name": "MOTIVATING EXAMPLE OF OPERATION", "section_text": "We now describe a motivating example of how our model can perform reasoning on-the-fly as it is ingesting input sequences. Let us suppose our model is reading a story, so the inputs are natural language sentences, and it is then required to answer questions about the story it has just read.

Our model is free to learn the key vectors $w_j$ for each memory $j$. One choice the model could make is to associate a single memory (via the key) with each entity in the story. The memory slot corresponding to a person could encode that person's location, the objects they are carrying, or the people they are with, depending on what information is relevant for the task at hand. As new information is received indicating that objects are acquired or discarded, or the person changes location, their memory slot will change accordingly. Similarly useful updates can be made for memories corresponding to object and location entities as well.

In fact, we could encode this choice of memories directly into our model, which we consider as a type of prior knowledge. By tying the weights of the key vectors with the embeddings of specific words, we can encourage the model to record information about certain words occurring in the text which we believe to be important. For example, given a list of named entities (which could be produced by a standard tagger), we could make the model have a separate memory slot for each entity. We consider this "tied" variant in our experiments. Since the list of entities is independent of the training data, this variant can handle entities not seen in the training set, as long as their embeddings can be initialized in a reasonable way (such as pre-training on a larger corpus).

Now, consider that the model reads the following two sentences, and the desired behavior of the gating function and update function at each memory as they are seen:

Mary picked up the ball. Mary went to the garden.

As the first sentence $s_t$ is ingested, and assuming memories encode entities, we would like the gates of the memories corresponding to both "Mary" and "ball" to activate. This is possible due to the location addressing term $s_t^T w_j$, which uses the key $w_j$. We expect that a well trained model would learn to do this. The model would hence modify both the entry corresponding to "Mary" to indicate that she is now carrying the ball, and also the entry corresponding to "ball", to indicate that it is being carried by Mary. When the second sentence is seen, we would like the model to again modify the "Mary" entry to indicate that she is now in the garden, and also modify the "ball" entry to reflect its new location as well. Assuming the information for "Mary" is contained in the "ball" memory as described before, the gate corresponding to "ball" can activate due to the content addressing term $s_t^T h_j$, even though the word "ball" does not occur in the second sentence. As before, the gate corresponding to the "Mary" entry can open due to the second term.

If the gating function and update function have weights such that the steps above are executed, then the memory will be in a state where questions such as "Where is the ball?" or "Where is Mary?" can be answered from the values of relevant memories, without the need for further complex reasoning.
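A matching sketch of the output module, again our own illustration rather than the authors' code, shows how a query would be answered from the final memories; the trailing comment ties it back to the two-sentence example above.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def entnet_output(H, q, Hmat, R, phi=np.tanh):
    """Answer a query q from the final memories H (illustrative only)."""
    p = softmax(H @ q)            # attention over the memory slots
    u = p @ H                     # blended memory
    return R @ phi(q + Hmat @ u)  # a score for every word in the vocabulary

# After feeding "Mary picked up the ball." and "Mary went to the garden."
# through entnet_step, a query encoding "Where is the ball?" should put most
# of its attention p on slots whose values now encode the garden.
```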
The EntNet is related to gated recurrent models such as the LSTM (Hochreiter & Schmidhuber, 1997) and GRU (Cho et al., 2014), which also use gates to fix or modify the information stored in the hidden state. However, these models use scalar memory cells with full interactions between them, whereas ours has separate memory slots, which could be seen as groups of hidden units with tied weights in the gating and update functions. Another important difference is the content-based matching term between the input and hidden state, which is not present in these models.

Our model also shares some similarities with the DNC/NTM framework of (Graves et al., 2014; 2016). There, as in our model, a block of hidden states acts as a set of read-writeable memories. On the other hand, the DNC has a relatively sophisticated controller network (such as an LSTM) which reads an input and outputs a number of interface vectors (such as keys and weightings) which are then combined via a softmax to read from and write to the external memory matrix. In contrast, our model can be viewed as a set of separate recurrent models whose hidden states store the memory slots. These hidden states are either fixed by the gates, or modified through a simple RNN-style update. The bulk of the reasoning is thus performed by these parallel recurrent models, rather than through a central controller. Moreover, instead of using a softmax, our model uses an independent gate for writing to each memory.

Our model is similar to a Memory Network and its variants (Weston et al., 2014; Sukhbaatar et al., 2015; Chandar et al., 2016; Miller et al., 2016) in the way it produces an output using a softmax over blocks of hidden states, and our encoding layer is inspired by techniques used in those works. However, Memory Networks explicitly store the entire input sequence in memory, and then sequentially update a controller's hidden state via a softmax gating over the memories. In contrast, our model keeps a fixed number of blocks of hiddens as memories and updates each block with an independent gated RNN. The Dynamic Memory Network of (Xiong et al., 2016) also performs updates via a recurrent model; however, it links memories to input tokens and updates them sequentially rather than in parallel.

The weight tying scheme and the parallel gated RNNs recall the gated graph network of (Li et al., 2015). If we interpret our work in that context, the "graph" is just a set of vertices with no edges; our gating mechanism is also somewhat different than the one they use. The CommNN model of (Sukhbaatar et al., 2016), the Interaction Network of (Battaglia et al., 2016), the Neural Physics Engine of (Chang et al., 2016) and the model of (Fragkiadaki et al., 2015) also use a set of parallel recurrent models with tied weights, but differ from our model in their use of inter-network communication and the lack of a gating mechanism.
Finally, there is another class of recent models that have a writeable memory arranged as (unbounded) stacks, linked lists or queues (Joulin & Mikolov, 2015; Grefenstette et al., 2015). Our model is different from these in that we use a key-value pair array instead of a stack, and in the experiments in this work, the array is of fixed size.

In this section we evaluate our model on three different datasets. Training details common to all experiments can be found in Appendix A."}, {"section_index": "6", "section_name": "5.1 SYNTHETIC WORLD MODEL TASK", "section_text": "We first study our model's properties on a toy task designed to measure the ability to keep a world model in memory. In this task two agents are initially placed randomly on a 10 x 10 grid, and at each time step a randomly chosen agent either changes direction or moves ahead. After a certain number of time steps, the model is required to provide the locations of each of the agents, thus revealing its internal world model (details can be found in Appendix B). This task is challenging because the model must combine up to T - 2 supporting facts in order to answer the question correctly, and must also keep the locations of both agents in memory and update them at different times.

We compared the performance of a MemN2N, LSTM and EntNet. For the MemN2N, we set the number of hops equal to T - 2 and the embedding dimension to d = 20. The EntNet had embedding dimension d = 20 and 5 memory slots, and the LSTM had 50 hidden units, which resulted in it having significantly more parameters than the other two models. For each model, we repeated the experiment with 5 different initializations and reported the best performance. All models were trained with ADAM (Kingma & Ba, 2014) with initial learning rates set by grid search over {0.1, 0.01, 0.001} and divided by 2 every 10,000 updates. Table 1a shows the results. The MemN2N has the worst performance, which degrades quickly as the length of the sequence increases. The LSTM performs better, but still loses accuracy as the length of the sequence increases. In contrast, the EntNet is able to solve the task in all cases.

The ability to generalize to sequences longer than those seen during training is a desirable property, which suggests that the network has learned the dynamics of the world it is trying to model. It also means the model can be trained less expensively. To study this, we trained an EntNet on variable length sequences between 1 and 20, and evaluated it on different length sequences longer than 20. Results are shown in Table 1b. We see that the model is able to achieve good performance several times past its training horizon.

Table 1: a) Error of different models on the World Model Task. b) Generalization of an EntNet trained up to T = 20. All errors range from 0 to 1.

(a)
Model     T = 10   T = 20   T = 40
MemN2N    0.09     0.633    0.896
LSTM      0        0.157    0.226
EntNet    0        0        0

(b)
T         20     30     40     50     60     70     80
Error     0      0      0      0.01   0.03   0.05   0.1
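For reference, the world dynamics the network has to learn can be written in a few lines. The sketch below uses our own encoding of the task's statements (it is not the authors' data pipeline); it replays a story and returns the ground-truth positions a perfect reader would report:

```python
DIRS = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def track_agents(events):
    """Replay a story and return each agent's final (x, y) position.

    events is our own encoding of the textual statements, e.g.
    ("jim", "place", (3, 4)), ("jim", "face", "north"), ("jim", "move", 2).
    """
    pos, facing = {}, {}
    for agent, action, arg in events:
        if action == "place":
            pos[agent] = arg
        elif action == "face":
            facing[agent] = DIRS[arg]
        else:                                   # "move": arg steps ahead
            dx, dy = facing[agent]
            x, y = pos[agent]
            pos[agent] = (x + arg * dx, y + arg * dy)
    return pos   # the world state a perfect reader would report
```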
"}, {"section_index": "7", "section_name": "5.2 BABI TASKS", "section_text": "We next evaluate our model on the bAbI tasks, which are a collection of 20 synthetic question-answering datasets first introduced in (Weston et al., 2015), designed to test a wide variety of reasoning abilities. They have since become a benchmark for memory-augmented neural networks, and most of the related methods described in Section 4 have been tested on them. Performance is measured using two metrics: the average error across all tasks, and the number of failed tasks (more than 5% error). We used version 1.2 of the dataset with 10k samples.

Code to reproduce these experiments can be found at https://github.com/facebook/MemNN/tree/master/EntNet-babi

Training Details: We used a similar training setup as (Sukhbaatar et al., 2015). All models were trained with ADAM using a learning rate of n = 0.01, which was divided by 2 every 25 epochs until 200 epochs were reached. Copying previous works (Sukhbaatar et al., 2015; Xiong et al., 2016), the capacity of the memory was limited to the most recent 70 sentences, except for task 3, which was limited to 130 sentences. Due to the high variance in model performance for some tasks, for each task we conducted 10 runs with different initializations and picked the best model based on performance on the validation set, as has been done in previous work. In all experiments, our model had embedding dimension d = 100 and 20 memory slots.

In Table 2 we compare our model to various other state-of-the-art models in the literature: the larger MemN2N reported in the appendix of (Sukhbaatar et al., 2015), the Dynamic Memory Network of (Xiong et al., 2016), the Dynamic Neural Turing Machine (Gulcehre et al., 2016), the Neural Turing Machine (Graves et al., 2014) and the Differentiable Neural Computer (Graves et al., 2016). Our model is able to solve all the tasks, outperforming the other models in terms of both the number of solved tasks and the average error.

Table 2: Results on bAbI Tasks with 10k training samples

Task                          NTM    D-NTM   MemN2N   DNC    DMN+   EntNet
1: 1 supporting fact          31.5   4.4     0        0      0      0
2: 2 supporting facts         54.5   27.5    0.3      0.4    0.3    0.1
3: 3 supporting facts         43.9   71.3    2.1      1.8    1.1    4.1
4: 2 argument relations       0      0       0        0      0      0
5: 3 argument relations       0.8    1.7     0.8      0.8    0.5    0.3
6: yes/no questions           17.1   1.5     0.1      0      0      0.2
7: counting                   17.8   6.0     2.0      0.6    2.4    0
8: lists/sets                 13.8   1.7     0.9      0.3    0.0    0.5
9: simple negation            16.4   0.6     0.3      0.2    0.0    0.1
10: indefinite knowledge      16.6   19.8    0        0.2    0      0.6
11: basic coreference         15.2   0       0.0      0      0.0    0.3
12: conjunction               8.9    6.2     0        0      0.2    0
13: compound coreference      7.4    7.5     0        0      0      1.3
14: time reasoning            24.2   17.5    0.2      0.4    0.2    0
15: basic deduction           47.0   0       0        0      0      0
16: basic induction           53.6   49.6    51.8     55.1   45.3   0.2
17: positional reasoning      25.5   1.2     18.6     12.0   4.2    0.5
18: size reasoning            2.2    0.2     5.3      0.8    2.1    0.3
19: path finding              4.3    39.5    2.3      3.9    0.0    2.3
20: agent's motivation        1.5    0       0        0      0      0
Failed Tasks (> 5% error)     16     9       3        2      1      0
Mean Error                    20.1   12.8    4.2      3.8    2.8    0.5

To analyze what kind of representations our model can learn, we conducted an additional experiment on Task 2 using a simple BoW sentence encoding and key vectors which were tied to entity embeddings. This was designed to make the model more interpretable, since the weight tying forces memory slots to encode information about specific entities.² After training, we ran the model over a story and computed the cosine distance between $\phi(H h_j)$ and each row $r_i$ of the decoder matrix $R$. This gave us a score which measures the affinity between a given memory slot and each word in the vocabulary. Table 3 shows the nearest neighboring words for each memory slot (which itself corresponds to an entity). We see that the model has indeed stored locations of all of the objects and characters in its memory slots, reflecting the final state of the story. In particular, it has the correct answer readily stored in the memory slot of the entity being inquired about (the milk). It also has correct location information about all other non-location entities stored in the appropriate memory slots. Note that it does not store useful or correct information in the memory slots corresponding to locations, most likely because this task does not contain questions about locations (such as "who is in the kitchen?").

²For most tasks including this one, tying key vectors did not significantly change performance, although it hurt in a few cases (see Appendix C). Therefore we did not apply it in Table 2.
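The probe behind Table 3 can be sketched as follows. This is our own reconstruction of the readout: the trained arrays `H`, `Hmat` and `R`, the nonlinearity `phi` and the vocabulary list are all assumed given.

```python
import numpy as np

def slot_neighbours(H, Hmat, R, vocab, phi=np.tanh, k=2):
    """k nearest vocabulary words (by cosine distance) for each memory slot."""
    results = []
    for h in H:                                   # one trained memory per slot
        z = phi(Hmat @ h)
        sims = (R @ z) / (np.linalg.norm(R, axis=1) * np.linalg.norm(z))
        top = np.argsort(-sims)[:k]
        results.append([(vocab[i], 1.0 - sims[i]) for i in top])
    return results                                # (word, cosine distance) pairs
```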
Therefore we did not apply it in Table[2\n1ask N1M D-NTM MemN2N DNC DMN+ EntNet 1: 1 supporting fact 31.5 4.4 0 0 0 0 2: 2 supporting facts 54.5 27.5 0.3 0.4 0.3 0.1 3: 3 supporting facts 43.9 71.3 2.1 1.8 1.1 4.1 4: 2 argument relations 0 0 0 0 0 0 5: 3 argument relations 0.8 1.7 0.8 0.8 0.5 0.3 6: yes/no questions 17.1 1.5 0.1 0 0 0.2 7: counting 17.8 6.0 2.0 0.6 2.4 0 8: lists/sets 13.8 1.7 0.9 0.3 0.0 0.5 9: simple negation 16.4 0.6 0.3 0.2 0.0 0.1 10: indefinite knowledge 16.6 19.8 0 0.2 0 0.6 11: basic coreference 15.2 0 0.0 0 0.0 0.3 12: conjunction 8.9 6.2 0 0 0.2 0 13: compound coreference 7.4 7.5 0 0 0 1.3 14: time reasoning 24.2 17.5 0.2 0.4 0.2 0 15: basic deduction 47.0 0 0 0 0 0 16: basic induction 53.6 49.6 51.8 55.1 45.3 0.2 17: positional reasoning 25.5 1.2 18.6 12.0 4.2 0.5 18: size reasoning 2.2 0.2 5.3 0.8 2.1 0.3 19: path finding 4.3 39.5 2.3 3.9 0.0 2.3 20: agent's motivation 1.5 0 0 0 0 0 Failed Tasks (> 5% error): 16 9 3 2 1 0 Mean Error: 20.1 12.8 4.2 3.8 2.8 0.5\nwas limited to 130 sentences. Due to the high variance in model performance for some tasks, for each task we conducted 10 runs with different initializations and picked the best model based on performance on the validation set, as it has been done in previous work. In all experiments, our. model had embedding dimension size d = 100 and 20 memory slots..\nTable 3: On the left, the network's final \"world model' after reading the story on the right. First and second nearest neighbors from each memory slot are shown. along with their cosine distance\nKey 1-NN 2-NN football hallway (0.135) dropped (0.056) milk garden (0.111) took (0.011) john kitchen (0.501) dropped (0.027) mary garden (0.442) took (0.034) sandra hallway (0.394) kitchen (0.121) daniel hallway (0.689) to (0.076) bedroom hallway (0.367) dropped (0.075) kitchen kitchen (0.483) daniel (0.029) garden garden (0.281) where (0.026) hallway hallway (0.475) left (0.060)\nlocations, most likely because this task does not contain questions about locations (such as \"who is in the kitchen?')."}, {"section_index": "8", "section_name": "5.3 CHILDREN'S BOOK TEST (CBT)", "section_text": "We next evaluated our model on the Children's Book Test (Hill et al., 2016), which is a semantic language modeling (sentence completion) benchmark built from children's books that are freely. available from Project Gutenberg[] Models are required to read 20 consecutive sentences from a given story and use this context to fill in a missing word from the 21st sentence. More specifically. each sample consists of a tuple (S, q, C, a) where S is the story consisting of 20 sentences, Q is the 21st sentence with one word replaced by a special blank token, C is a set of 10 candidate answer. of the same type as the missing word (for example, common nouns or named entities), and a is the. true answer (which is always contained in C)..\nIt was shown in (Hill et al.]2016) that methods with limited memory such as LSTMs perform. well on more frequent, syntax based words such as prepositions and verbs, being similar to human. performance, but poorly relative to humans on more semantically meaningful words such as named. entities and common nouns. Therefore, most recent methods have been evaluated on the Named. 
Training Details: We adopted the same window memory approach used in (Hill et al., 2016), where each input corresponds to a window of text $\{w_{i-(b-1)/2}, ..., w_i, ..., w_{i+(b-1)/2}\}$ centered at a candidate $w_i \in C$. In our experiments we set b = 5. All models were trained using standard stochastic gradient descent (SGD) with a fixed learning rate of 0.001. We used separate input encodings for the update and gating functions, and applied a dropout rate of 0.5 to the word embedding dimensions. Key embeddings were tied to the embeddings of the candidate words, resulting in 10 hidden blocks, one per member of C. Due to the weight tying, we did not need a decoder matrix and used the distribution over candidates to directly produce a prediction, as described in Section 3.

We found that a simpler version of the model worked best, with U = V = 0, W = I and $\phi$ equal to the identity. We also removed the normalization step in this simplified model, which we found to hurt performance. This can be explained by the fact that the maximum frequency baseline model in (Hill et al., 2016) has performance which is significantly higher than random, and including the normalization step hides this useful frequency-based information.

Results: We draw a distinction between two setups: the single-pass setup, where the model must read the story and query in order and immediately produce an output, and the multi-pass setup, where the model can use the query to perform attention over the story. The first setup is more challenging, because the model does not know beforehand which query it will be presented with, and must learn to retain information which is useful for a wide variety of potential queries. For this reason it can be viewed as a test of the model's ability to construct a general-purpose representation of the current state of the story. The second setup leverages all available information, and allows the model to use knowledge of which question will be asked when it reads the story.

Table 4: Accuracy on CBT test set. Single-pass models encode the document before seeing the query; multi-pass models have access to the query at read time.

              Model                                               Named Entities   Common Nouns
              Kneser-Ney Language Model + cache                   0.439            0.577
              LSTMs (context + query)                             0.418            0.560
Single Pass   Window LSTM                                         0.436            0.582
              EntNet (general)                                    0.484            0.540
              EntNet (simple)                                     0.616            0.588
              MemNN                                               0.493            0.554
              MemNN + self-sup                                    0.666            0.630
              Attention Sum Reader (Kadlec et al., 2016)          0.686            0.634
Multi Pass    Gated-Attention Reader (Dhingra et al., 2016)       0.690            0.639
              EpiReader (Trischler et al., 2016)                  0.697            0.674
              AoA Reader (Cui et al., 2016)                       0.720            0.694
              NSE Adaptive Computation (Munkhdalai & Yu, 2016)    0.732            0.714

In Table 4 we show the performance of the general EntNet, the simplified EntNet, as well as other single-pass models taken from (Hill et al., 2016). The general EntNet performs better than the LSTMs and n-gram model on the Named Entities task, but lags behind on the Common Nouns task. The simplified EntNet outperforms all other single-pass models on both tasks, and also performs better than the Memory Network which does not use the self-supervision heuristic. However, there is still a performance gap when compared to more sophisticated machine comprehension models, many of which perform multiple layers of attention over the story using query knowledge. The fact that the simplified EntNet is able to obtain decent performance is encouraging, since it indicates that the model is able to build an internal representation of the story which it can then use to answer a relatively diverse set of queries.
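With the keys tied to the candidates, prediction reduces to the attention step alone. A minimal sketch of this readout (our own illustration, with `H` assumed to hold one memory per candidate):

```python
import numpy as np

def predict_candidate(H, q, candidates):
    """With keys tied to the candidates, slot attention is the answer.

    H holds one memory per candidate, so the softmax over q^T h_j is already
    a distribution over the 10 candidates and no decoder matrix is needed.
    """
    scores = H @ q
    scores = scores - scores.max()
    p = np.exp(scores) / np.exp(scores).sum()
    return candidates[int(p.argmax())], p
```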
Two closely related challenges in artificial intelligence are designing models which can maintain an estimate of the state of a world with complex dynamics over long timescales, and models which can predict the forward evolution of the state of the world from partial observation. In this paper, we introduced the Recurrent Entity Network, a new model that makes a promising step towards the first goal. Our model is able to accurately track the world state while reading text stories, which enables it to set a new state-of-the-art on the bAbI tasks, the competitive benchmark of story understanding, by being the first model to solve them all. We also showed that our model is able to capture simple dynamics over long timescales, and is able to perform competitively on a real-world dataset.

Although our model was able to solve all the bAbI tasks using 10k training samples, we found that performance dropped considerably when using only 1k samples (see Appendix). Most recent work on the bAbI tasks has focused on the 10k samples setting, and we would like to emphasize that solving them in the 1k samples setting remains an open problem which will require improving the sample efficiency of reasoning models, including ours.

Recent works have made some progress towards the second goal of forward modeling, for instance in capturing simple physics (Lerer et al., 2016), predicting future frames in video (Mathieu et al., 2015) or responses in dialog (Weston, 2016). Although we have only applied our model to tasks with textual inputs in this work, the architecture is general and future work should investigate how to combine the EntNet's tracking abilities with such predictive models.

Battaglia, Peter W., Pascanu, Razvan, Lai, Matthew, Rezende, Danilo Jimenez, and Kavukcuoglu, Koray. Interaction networks for learning about objects, relations and physics. CoRR, abs/1612.00222, 2016. URL http://dblp.uni-trier.de/db/journals/corr/corr1612.html#BattagliaPLRK16

Chandar, Sarath, Ahn, Sungjin, Larochelle, Hugo, Vincent, Pascal, Tesauro, Gerald, and Bengio, Yoshua. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.

Cho, Kyunghyun, van Merrienboer, Bart, Bahdanau, Dzmitry, and Bengio, Yoshua. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST@EMNLP 2014, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, Doha, Qatar, 25 October 2014, pp. 103-111, 2014. URL http://aclweb.org/anthology/W/W14/W14-4012.pdf

Graves, Alex, Wayne, Greg, Reynolds, Malcolm, Harley, Tim, Danihelka, Ivo, Grabska-Barwinska, Agnieszka, Colmenarejo, Sergio Gomez, Grefenstette, Edward, Ramalho, Tiago, Agapiou, John, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.

He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. CoRR, abs/1502.01852, 2015.

Kingma, Diederik P. and Ba, Jimmy. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. URL http://arxiv.org/abs/1412.6980

Lerer, Adam, Gross, Sam, and Fergus, Rob.
Learning physical intuition of block towers by example. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 430-438, 2016. URL http://jmlr.org/proceedings/papers/v48/lerer16.html

Li, Yujia, Tarlow, Daniel, Brockschmidt, Marc, and Zemel, Richard S. Gated graph sequence neural networks. CoRR, abs/1511.05493, 2015. URL http://arxiv.org/abs/1511.05493

Miller, Alexander, Fisch, Adam, Dodge, Jesse, Karimi, Amir-Hossein, Bordes, Antoine, and Weston, Jason. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016.

Munkhdalai, Tsendsuren and Yu, Hong. Reasoning with memory augmented neural networks for language comprehension. CoRR, abs/1610.06454, 2016. URL https://arxiv.org/abs/1610.06454

Trischler, Adam, Ye, Zheng, Yuan, Xingdi, and Suleman, Kaheer. Natural language comprehension with the epireader. CoRR, abs/1606.02270, 2016. URL http://arxiv.org/abs/1606.02270

Xiong, Caiming, Merity, Stephen, and Socher, Richard. Dynamic memory networks for visual and textual question answering. In ICML, 2016."}, {"section_index": "9", "section_name": "A TRAINING DETAILS", "section_text": "All models were implemented using Torch (Collobert et al., 2011). In all experiments, we initialize our model by drawing weights from a Gaussian distribution with mean zero and standard deviation 0.1, except for the PReLU slopes and encoder weights, which were initialized to 1. Note that the PReLU initialization is related to two of the heuristics used in (Sukhbaatar et al., 2015), namely starting training with a purely linear model, and adding non-linearities to half of the hidden units. Our initialization allows the model to choose when and how much to enter the non-linear regime. Initializing the encoder weights to 1 corresponds to beginning with a BoW encoding, which the model can then choose to modify. The initial values of the memory slots were initialized to the key values, which we found to help performance. Optimization was done with SGD or ADAM using minibatches of size 32, and gradients with norm greater than 40 were clipped to 40. A null symbol, whose embedding was constrained to be zero, was used to pad all sentences or windows to a fixed size."}, {"section_index": "10", "section_name": "B DETAILS OF WORLD MODEL EXPERIMENTS", "section_text": "Two agents are initially placed at random on a 10 x 10 grid with 100 distinct locations {(1, 1), (1, 2), ..., (9, 10), (10, 10)}. At each time step an agent is chosen at random. There are two types of actions: the agent can face a given direction, or can move a number of steps ahead. Actions are sampled until a legal action is found, by either choosing to change direction or to move with equal probability. If they change direction, the direction is chosen between north, south, east and west with equal probability. If they move, the number of steps is randomly chosen between 1 and 5. A legal action is one which does not place the agent off the grid. Stories are given to the network in textual form. The first action after each agent is placed on the grid is to face a given direction. Therefore, the maximum number of actions made by one agent is T - 2. The network learns word embeddings for all words in the vocabulary, such as locations, agent identifiers and actions. At question time, the model must predict the correct answer (which will always be a location) from all the tokens in the vocabulary.
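A story generator for this task can be reconstructed from the description above. The sketch below is our own reading of the protocol, in particular of how the T - 2 actions are counted; it is not the authors' generator.

```python
import random

DIRS = {"north": (0, 1), "south": (0, -1), "east": (1, 0), "west": (-1, 0)}

def generate_story(T=20, grid=10):
    """Sample one toy story and its ground-truth answer.

    Two agents are placed at random and T - 2 actions are then sampled; an
    agent's first action is always to face a direction, and a sampled move
    is rejected (and the action resampled) if it would leave the grid.
    """
    state = {a: {"pos": (random.randint(1, grid), random.randint(1, grid)),
                 "dir": None} for a in ("jim", "bob")}
    story = [f"{a} is at {s['pos']}" for a, s in state.items()]
    for _ in range(T - 2):
        a = random.choice(list(state))
        while True:
            if state[a]["dir"] is None or random.random() < 0.5:
                state[a]["dir"] = random.choice(list(DIRS))
                story.append(f"{a} faces {state[a]['dir']}")
                break
            steps = random.randint(1, 5)
            dx, dy = DIRS[state[a]["dir"]]
            x, y = state[a]["pos"][0] + steps * dx, state[a]["pos"][1] + steps * dy
            if 1 <= x <= grid and 1 <= y <= grid:     # only legal moves survive
                state[a]["pos"] = (x, y)
                story.append(f"{a} moves {steps}")
                break
    return story, {a: s["pos"] for a, s in state.items()}
```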
We provide some additional experiments on the bAbI tasks, in order to better understand the influence of architecture, weight tying, and amount of training data. Table 5 shows results when a simple BoW encoding is used for the inputs. Here, the EntNet still performs better than a MemN2N which uses the same encoding scheme, indicating that the architecture has an important effect. Tying the key vectors to entities did not help, and hurt performance for some tasks. Table 6 shows results when using only 1k training samples. In this setting, the EntNet performs worse than the MemN2N.

Table 5: Error rates on bAbI tasks when inputs are encoded using BoW. "Tied" refers to the case where key vectors are tied with entity embeddings.

Task                          MemN2N   EntNet-tied   EntNet
1: 1 supporting fact          0        0             0
2: 2 supporting facts         0.6      3.0           1.2
3: 3 supporting facts         7        9.6           9.0
4: 2 argument relations       32.6     33.8          31.8
5: 3 argument relations       10.2     1.7           3.5
6: yes/no questions           0.2      0             0
7: counting                   10.6     0.5           0.5
8: lists/sets                 2.6      0.1           0.3
9: simple negation            0.3      0             0
10: indefinite knowledge      0.5      0             0
11: basic coreference         0        0.3           0
12: conjunction               0        0             0
13: compound coreference      0        0.2           0.4
14: time reasoning            0.1      6.2           0.1
15: basic deduction           11.4     12.5          12.1
16: basic induction           52.9     46.5          0
17: positional reasoning      39.3     40.5          40.5
18: size reasoning            40.5     44.2          45.7
19: path finding              74.4     75.1          74.0
20: agent's motivation        0        0             0
Failed Tasks (> 5%)           9        8             6
Mean Error                    15.6     13.7          10.9

Table 6: Results on bAbI tasks with 1k samples

Task                          MemN2N   EntNet
1: 1 supporting fact          0        0.7
2: 2 supporting facts         8.3      56.4
3: 3 supporting facts         40.3     69.7
4: 2 argument relations       2.8      1.4
5: 3 argument relations       13.1     4.6
6: yes/no questions           7.6      30.0
7: counting                   17.3     22.3
8: lists/sets                 10.0     19.2
9: simple negation            13.2     31.5
10: indefinite knowledge      15.1     15.6
11: basic coreference         0.9      8.0
12: conjunction               0.2      0.8
13: compound coreference      0.4      9.0
14: time reasoning            1.7      62.9
15: basic deduction           0        57.8
16: basic induction           1.3      53.2
17: positional reasoning      51.0     46.4
18: size reasoning            11.1     8.8
19: path finding              82.8     90.4
20: agent's motivation        0        2.6
Failed Tasks (> 5%)           11       15
Mean Error                    13.9     29.6"}]